New home-made cross-OS x265 benchmark

I’ve made a series of encodings as an x265 benchmark between my current two OSs: Linux and Haiku.

https://forum.doom9.org/showthread.php?p=1964214#post1964214

1 Like

I don’t like these comparisons because they say nothing about the actual benefit. Everyone has their own views and expectations here, and the areas of application are very different. For my part, I tried Linux, but at some point I came to the conclusion that the system was too sprawling and too unclearly structured, and — with the thousands of distributions out there, each doing its own thing — simply too erratic and inconsistent (Ubuntu, for example, is always turning everything upside down: now GNOME, then KDE, then…). No thank you.

I use Windows for games and office work, because every game and software that was created for the respective system runs there.

Haiku is different: there are no thousands of distributions, it is structured uniformly, there is only one package manager, it is lean, and you can install it without pulling in three pieces of unrelated software along the way. I also prefer the system’s simple design, often dismissed as old school, over the alternatives. For me, the operating system is a platform from which I use what I need — not a multimedia-laden, bloated, colorful, three-dimensional showpiece (a bellows, so to speak) that exists more to be looked at, and to slow the system down, than to serve as a platform.

As you indicated, Haiku is still in beta, which means the code isn’t optimized. That is even more true for the nightly build you used, where a lot of debugging features are enabled.

1 Like

Well, in the case of encoding a video, I’m fairly sure everyone has the same expectations:

  • You want it to correctly encode the video
  • You want it to be as fast as possible

This quick benchmark shows we have quite some work to do on the second point. It would indeed be interesting to check the results on a beta release rather than a nightly, since betas are a little more optimized.

Then the next step is on our side: finding out why there is such a large difference and trying to improve performance.

3 Likes

I see my comparison could have been fairer.
Is there a way I can install a kernel without debug features before the release of beta4? I could then re-do my run of encodings.
I wouldn’t go back to beta3, which was released in mid-2021 (and how would I downgrade a system that is already up to date?).

You have to compile a haiku.hpkg with KDEBUG_LEVEL lowered to at least 1, if not 0.
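If it helps, the override above might look something like this — a hypothetical sketch only; the exact file and variable plumbing is an assumption, so grep the build/jam files in your checkout for KDEBUG_LEVEL to confirm where it is actually consumed:

```text
# Hypothetical build/jam/UserBuildConfig entry (unverified assumption):
KDEBUG_LEVEL = 0 ;

# then rebuild just the system package:
#   jam -q haiku.hpkg
```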

2 Likes

The numbers could be better, but they’re really not that bad. For the average user, decoding speed is also more noticeable than encoding speed.

Will the page … suit my needs for the job?

Yes, start from Building Haiku | Haiku Project and do these steps in order:

I would not work off a USB 3.0 stick when benchmarking x265.

2 Likes

It would be interesting to have an strace of kernel calls for both Haiku and Linux while this runs, to see what is happening. I would assume the slowdown on Haiku is due to slower I/O and possibly slower USB3 code, plus of course the kernel debugging mentioned above. On Linux it may use io_uring or other async I/O syscalls. Beyond that, there is no logical reason Haiku would be slower on CPU-bound operations, though it would also be good to ensure the same compiler and optimization level were used.

It might also be interesting to encode to a tmpfs or another memory-based file system to take some of the I/O code out of the picture. Haiku might still be slower, because Linux syscalls and I/O paths have likely been optimized to death (even when the I/O goes to a memory filesystem).

With all that said, I think Haiku generally wins in perceived UI responsiveness compared to most default Linux setups, though it wouldn’t hurt to optimize as much as we can as well.

6 Likes

Why not?

When I measured USB3 speed, it was enough to run the benchmark safely (ToS is 37.8 GiB in size).

~> time cp -v /USB_STICK/x265_benchmark/ToS_1920x800_xdither.y4m /dev/null
‘/USB_STICK/x265_benchmark/ToS_1920x800_xdither.y4m’ → ‘/dev/null’

real 5m12,404s
user 0m0,175s
sys 0m28,658s

(made a series of run, then picked the worst time)

Those numbers say my USB3 stick provides 40596585783 B / 312.404 s ≈ 123.9 MiB/s

Finally, my memory stick would only become a real bottleneck if my CPU could crunch
17620 frames / 312.404 s = 56.40 fps
which is quite far from reality :slight_smile:

And this is exactly my feeling when using Haiku. BUT everyday use cases do include brute CPU power (in a whole range of senses), and since I usually run batches of H.265 encodings, I like this benchmarking scenario.

1 Like

USB needs CPU power and adds latency. Benchmarking on a RAM disk is usually more meaningful.

2 Likes

I doubt it; it would make little sense to use io_uring just for reading and writing data to files. io_uring is a lot of code to write and is mainly useful for realtime work. For file access, just use FILE* with fopen, fread and fwrite: you get buffering on the application side, and that already reduces system-call overhead by a lot.

Time for a silly question:
what is the /system/packages/administrative directory for?
It is currently eating 3.7 GB on my disk…

It’s a backup of previous package states; with those you can boot into an earlier state.
You can safely delete the old states if you don’t plan to boot back into them.

mmmmmhhhhh…
As I’m going to build an x86_64 binary in an x86_64 environment, do I really also need to build the “buildtools”, or are they already available on my OS?
(I’m worried about running short of disk space.)

You need the buildtools; currently the ones provided with the OS are not capable of building the whole OS (specifically, they can’t build the BIOS bootloader).

You need to build the buildtools for 64 bit.