Ability to kill an application


#1

I’ve been working on getting Ghidra to work on Haiku. I was using vim, and decided to try IntelliJ. This was a bad idea, because IntelliJ managed to lock up my system. I was running Haiku on bare metal, with a 6th gen i7 and 16 GB RAM, from a USB 3 thumb drive. Yet I couldn’t open any other applications, or even open a Tracker window. I figured I’d let it do its thing and come back after dinner.

A few hours later, there was no progress. I decided to reboot, but the Java process managed to cancel my reboot request. I also opened a new terminal tab and tried to use the kill command to stop IntelliJ, but this had no effect. I feel that the OS and the user should be able to kill misbehaving applications, but in Haiku, the bad application was unstoppable until I pressed and held the power button to force a hard shutdown. Was there a better way to kill the application? Does Haiku kill bad applications Linux-style, or let bad applications wreak havoc on the system like Classic Mac OS?


#2

Java doesn’t have any facilities to prevent Haiku from rebooting, AFAIK… so it’s probable that the system was just locked up in some way. It could be that the USB drive couldn’t adequately handle all the I/O of what you were doing… USB drives are not really ideal to run large, heavy applications from; they just don’t have the IOPS performance. Haiku did just get NVMe SSD read support today, though, and hopefully write support soon! (Thanks @waddlesplash!)

If you press CTRL+ALT+DEL you get a task killer, and you can also kill things from the process manager in the tray by default.

Haiku does its own thing memory-allocation-wise… if you run out of memory, allocation simply fails. This may not be what Java expects in many cases, if you are running an application that likes to overcommit. It means the system should typically remain functional enough to remedy issues even at maximum memory usage. There is also swap support.

I would suggest installing on a real disk (a SATA SSD is recommended, but an HDD should be fine too), or even running in a VM, rather than running from USB.


#3

Not really sure about the latest revisions, but while testing MTP USB connections in a VM (with the libusb bugs that we know about and have ticketed), the process was unkillable and hung unstoppably until I switched the USB mode on the phone (a virtual unplug and replug, I guess).

So… neither Java-related nor an out-of-memory error (the MTP library barely uses a few MB).


#4

That’s a bug in the USB stack that causes system calls to lock forever in a non-interruptible way.

The easiest way to kill applications on a struggling system is probably the “Vulcan Death Grip”: hold Control, Alt and Shift, and click the application in the Deskbar. This does not need to start any new app, so it is always easy to access.


#5

I know that running Haiku off the USB drive isn’t the best, and I do plan to get another SSD to manage my storage. Soon my machine will triple-boot!


#6

Pro tip: Vulcan death grip

Right-Ctrl + Right-Alt + Right-Shift + Click on running Application in the Deskbar.

It’s our kill -9 in the GUI :slight_smile:


#7

Classic Mac OS didn’t have proper process isolation, hence the havoc. Haiku does; it’s impossible for a program to intentionally deadlock the whole OS.

That being said, if it requests the kernel to do something, it’s possible for there to be a kernel bug and everything to lock up on account of that. If you couldn’t even open a new Tracker window, but you could open a new Terminal tab, this sounds like something related to ports was locked up…

If doing the “Vulcan death grip” does not work, please drop into KDL via Alt+PrntScrn+D, locate the Ghidra threads, and run a backtrace on whichever ones are not listed as “active”. This will locate where the deadlock is. Then you can open a ticket, and we’ll investigate.


#8

Hold my beer, not that I drink beer, but hold it anyway… fork bombs away… etc… Obviously it isn’t quite as dramatic as on most other OSes; however, it can render Haiku pretty much unusably locked up for all practical intents. Note that I actually tried this, and closed the terminal I ran it in expecting all the forks to close… and nope, it kept on going even after the terminal was gone.

On Linux, the X server could get pushed out of memory before the OOM killer could kick in, and you’d be swapping for days… thankfully it isn’t that bad here.


#9

Haiku shouldn’t be brought down by a programming bug. At the very least, the kernel should always be up to the task.
You can tell it isn’t right now, but technically, a user should not be able to bring everything down, so what you are seeing is misbehaviour, and you should open a ticket for it.
Nobody wants a problematic kernel.
So create a ticket for it!


#10

A forkbomb is probably not cause for ticket creation… it is, however, evidence that while @waddlesplash’s statement is certainly very true in spirit, it isn’t true absolutely. Not even from the application standpoint… let alone when poking at bugs in the kernel. It’s just a tiny example of why that statement is unrealistic.

Linux combats the issue with process limits, but I’m not sure that is how Haiku would do it. Process limits also wouldn’t prevent other methods of exhausting memory without recovery.

A forkbomb consumes both CPU and memory: the continual forking consumes CPU, and each new process consumes memory. So any application that consumes lots of memory can cause a similar effect by exhausting it… which is why I chose a forkbomb as a simple example. Amazingly, most of Haiku’s GUI still works at that point, but there really isn’t enough memory for some things to work correctly. You can almost certainly drop into KDL and kill processes to get things working again, since that doesn’t require dynamically allocated memory, but that isn’t very user-friendly.

A real-world example with a legitimate application that might cause this is one with a memory leak… eventually it will exhaust memory, and that’s where you end up.

Perhaps one way around this is for a few special applications, such as Deskbar, Tracker and the init system, to allocate from a slightly larger pool of reserved memory than all the other applications, so that they continue to work even when the rest of available memory is exhausted. I think that should work even if memory were very badly fragmented, since it would mean a large reserved chunk stays free. Unless, of course, some application abused Tracker/Deskbar etc. to allocate in that memory area and exhaust it as well.

Obviously not that big of a deal if you only download curated software from the repos lol


#11

Because the point of forking is that there is now a new process, independent of the old one, which will survive if the parent is killed. Closing the terminal only kills the parent process. This is the expected POSIX-compliant behavior.

It’s possible to kill entire process trees, but I don’t know if we have any tools readily available for that.

At least here, forkbombs seem to just hit OOM after not too long, and then return errors. That’s problematic, but it certainly doesn’t kill background processes or cause other such issues like it does on Linux. So we are already doing well here.


#12

Memory fragmentation is only a problem on (1) 32-bit systems (address space exhaustion), and (2) for drivers that need physically contiguous buffers. That’s it. If there’s free memory, the kernel can map it into a virtual address space, no matter how discontiguous it is.


#13

True, though I guess most Haiku users are probably still using 32-bit Haiku… newer users may be gravitating toward 64-bit, though.


#14

Agreed, it’s way better… in my case, though, it did make the system inconvenient to recover; the UI was pretty broken at that point.


#15

Haiku is a big improvement over BeOS in this field; I still remember how easy it was to stall the whole system with buggy programs and drivers.

I was amused to see how long Haiku was able to keep functioning after I mistakenly overwrote its whole partition with zeroes while it was running.


#16

I once zeroed sector zero of my disk, but was able to fdisk it back to the right data, thanks to my system (Linux) holding the partition table in some files (I don’t remember if they were in /sys or /dev).


#17

That’s normal… Linux doesn’t reread the partition table unless you force it to. Even in the past, rereading often didn’t update everything, so it was frequently easier to just reboot.

Haiku’s solution to this is to provide a GUI for most disk management… so you are less likely to end up in this situation. Still, if you are using dd, the potential is there.