diff --git a/blog/benchmarking-dwarfs.md b/blog/benchmarking-dwarfs.md
new file mode 100644
index 0000000..1fca903
--- /dev/null
+++ b/blog/benchmarking-dwarfs.md
@@ -0,0 +1,38 @@
+# Benchmarking and comparing DwarFS
+
+DwarFS is a filesystem developed by the user mhx on GitHub [1], self-described as "A fast high compression read-only file system for Linux, Windows, and macOS." One of my ideas for blendOS was to layer different packages, and with its high compression and its option to be mounted as a FUSE-based filesystem, DwarFS is an appealing option for this use case - blendOS is immutable, so it might as well have some compression.
+
+## Methodology
+
+The following datasets will be used for this test:
+
+- 25 GB of null data (just `000000000000` in binary)
+- 25 GB of random data[^1]
+- Data for a 100 million-sided regular polygon; ~29 GB[^2]
+- The current Linux longterm release source ([6.6.58](https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.6.58.tar.xz) [2]); ~1.5 GB
+- For some rough latency testing:
+ - 1000 4 kilobyte files filled with null data (again, just `0000000` in binary)
+ - 1000 4 kilobyte files filled with random data
+
+All this data should cover both latency and read speed testing for data that compresses differently - extremely compressible files with null data, decently compressible files, and random data that can't be compressed well.
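
The post doesn't show how the datasets were produced, but they can all be generated with standard tools. Below is a minimal sketch assuming GNU coreutils; the file names are illustrative (not from the post), and the large files use a reduced size here so the sketch runs quickly - the full-size runs would use `count=25600` with `bs=1M` for 25 GB.

```shell
# Illustrative dataset generation (assumes GNU coreutils).
mkdir -p datasets/latency-null datasets/latency-random

# Sequential-read files: /dev/zero for null (maximally compressible)
# data, /dev/urandom for incompressible data. count=16 is a reduced
# stand-in for the 25 GB files (count=25600).
dd if=/dev/zero    of=datasets/null.bin   bs=1M count=16 status=none
dd if=/dev/urandom of=datasets/random.bin bs=1M count=16 status=none

# Latency test sets: 1000 files of 4 KiB each, null and random.
for i in $(seq 1 1000); do
    head -c 4096 /dev/zero    > "datasets/latency-null/file$i.bin"
    head -c 4096 /dev/urandom > "datasets/latency-random/file$i.bin"
done
```
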
+
+## Sources
+
+1.
+2.
+
+## Footnotes
+
+[^1]: This data is from a very early version of a math demonstration program made by a friend. The example below shows what the data looks like for a 3-sided regular polygon.
+
+3-sided regular polygon data
+
+
+
+
+
+[^2]: My code can generate up to 25 GB/s. However, it does random writes to my drive, which is *much* slower. So on one hand, you could say my code is so amazingly fast that current-day technologies simply can't keep up. Or you could say that I have no idea how to code for real-world scenarios.
diff --git a/blog/blendos.html b/blog/blendos.html
index a0ac692..38e2027 100644
--- a/blog/blendos.html
+++ b/blog/blendos.html
@@ -166,7 +166,7 @@
play in the next post.