I could do that remotely, but I’d need an extra machine for that, and that doesn’t make sense: my shiny new main PC has 8 cores/16 threads and 128GB of memory precisely so I can do heavy work (mostly data analysis) on it. Now, if virtualization were working on Haiku, I’d be able to do all the heavy lifting in a Linux VM and interact with it through SSH… but VMs are not implemented either.
Haiku really has the best desktop experience; I’d much rather get it all running natively on Haiku.
For now, I’m stuck with running Haiku in a VM and connecting to the host. Which works… but it’s not ideal.
Neither Linux as a desktop nor Windows as a server has taken off, and for good reason. Making Haiku worse as a desktop so it’d be better as a server doesn’t make sense in my opinion.
I 100% agree.
For my taste there have been too many attempts recently to push Haiku in fundamentally different directions.
Some weeks ago it was “Haiku on phones”, today it’s “Haiku as a server”, and I really don’t understand it.
Haiku’s stated goal is to be the perfect desktop OS, and that’s exactly what it is.
It makes no sense to put an OS with a graphical UI so deeply built into the system onto a server.
It may be helpful while developing to be able to run a local development server directly on Haiku, but that already works.
For larger-scale server deployments, please, just use FreeBSD, OmniOS or any other Unix thingy that was made especially for this job.
Linux as a desktop may not have taken off, but it has a fine desktop experience for the large group of people who use it.
Windows as a server might not be the most popular, but it’s still widely used, to great success.
Workstation use is (very, very) different from server usage: it requires a good desktop experience combined with the ability to run heavy workloads. Linux fits my needs and I have no issues doing my work on a Linux desktop. I do prefer the Haiku desktop experience, though, and since I run most of my workload from the command line (which Haiku supports), Haiku has the potential to be my perfect desktop.
I think Haiku should offer a simple, minimalistic (passwordless, single-user) out-of-the-box experience, as it does right now. Haiku works great as an embedded desktop experience and it’s very modular. If someone can come up with a way to make the application sandboxing experience better than the hippy 70’s UNIX way, I’m all for it. But since Haiku already theoretically implements a multi-user environment, unlocking it for advanced users is not going to hurt anyone, and it’s not suddenly going to be forced on all users.
Please stop thinking yours is the only use case for Haiku.
I wonder what interests you about Haiku then. You want compatibility with software that assumes Unix permissions. If that compatibility is built in, what will end up happening is Haiku’s UX becoming identical to a Linux distro’s, since people would be too complacent to make it work with the default single-user behavior.
I think Haiku appeals to people who definitely don’t want that. Otherwise it’d just be a curiosity for people looking for something “different” for its own sake.
I think I might have mentioned already that the Haiku desktop experience is what appeals to me most. My analysis software is custom-made and can easily be run on top of any operating system that supports compiling and running C code (and that includes Haiku). For me, Haiku is a few small pieces away from being a perfect workstation system. I wonder why you don’t want to accept that someone can have a different use case than your own?
So, Haiku already has the bits and pieces in place. All that’s missing is the (optional!!!) ability to secure user logins with a password, switch user accounts, lock the screen and hopefully run multiple instances of app_server. This will not force anything on users who have no use for it… but it would make Haiku more capable as a family operating system.
With recent web browser, Vulkan and Wine developments, it seems likely that we will eventually even be able to play some Windows games on Haiku. Having Haiku installed on a family computer is not far-fetched. That’s not my primary use case, but I understand that having multiple computers is not financially feasible for everyone.
This is great. My primary interest lies in being able to run isolated applications and services outside of my user environment. I’m going to do some research to see if I can get my work environment up and running on top of Haiku, because it seems like not much else is needed for me to use my computer the way I like.
Application data security should be the responsibility of the application. The OS by default should probably always sandbox application memory and disk access, and only allow access to shared libraries read-only.
Shoehorning data security into the OS is not a good solution.
If you have a web server, the OS should be sandboxing it by default, and the application itself should also have a secure design.
This tool looks so nice compared to similar tools on other OSes, where you have to select each user in a pane and change options. Just the data required, laid out in a table, simple and intuitive - very much the Haiku way!
I don’t agree.
I used Debian, Ubuntu, Manjaro, Arch, Fedora, FreeBSD and OpenBSD on the same bog-standard Lenovo ThinkPad and ThinkCentre (not distro-hopping, but months-long daily usage). It was a terrible experience and I still have PTSD when I recall that time. I am not only talking about program bugs or inconsistency, but about actual system bugs too, where the mouse cursor goes transparent, window decorators go missing, the GUI crashes and other friendly unixy errors made up my days, and I hadn’t done anything fancy, I only used it for web/music/movies.
Sorry, but I don’t like duct-tape sandcastles and I am wholeheartedly against introducing any more nixisms into Haiku without a good reason. We’ve had enough pain with the nixes; they should die already. The eternal ‘70s must end, otherwise we are all damned.
Whatever Haiku does, it needs to do it better than the status quo. Robust security features (for login/access at least) speak to the maturity of an OS, and neither their utility nor the user perception of them should be underestimated.
Further implementation of multiuser access (both command line and maybe even UI), permissions, etc should be elegant, sophisticated and make sense to the user. But these can evolve as the needs become clear, especially as we seem to have many of the building blocks at hand.
Sandboxing application file access has potential drawbacks - if we’re keeping the filesystem metaphor, then sandboxing file access immediately destroys data organisation. See: sandboxed applications on Linux desktops saving to /home/user/nameofpackagemanager/748327absolute498234_47832gibberish8_nameofapp/0/filegoeshere and other such stupidity. Usually, I want all my photos in one place, not my Photoshop photos here, my GIMP photos there…
Sandboxes have plenty of valid use cases, but sandboxing files for desktop apps is a difficult model, especially as a default. What would be preferable is elective sandboxing. E.g. some kind of package flag that says “0ad does not need to be able to access personal files, sandbox its storage to /home/username/savegames/0ad” - plus some sort of contextual menu option to say “I’m going to some suspect website, let’s jail WebPositive for this one runtime”.
Provide an API for sandboxing file access at the application level.
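For what it’s worth, OpenBSD already ships a primitive very close to this idea: unveil() lets a program declare, up front, the only paths it will ever need, and everything else effectively disappears. A minimal sketch (OpenBSD-specific, nothing equivalent exists on Haiku today; the 0ad save path is just the example from above):

```c
/* OpenBSD-only sketch: restrict a game to its own save directory. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(void)
{
	/* Declare the only path this process may read/write/create in. */
	if (unveil("/home/username/savegames/0ad", "rwc") == -1) {
		perror("unveil");
		return EXIT_FAILURE;
	}
	/* Lock the list: no further unveil() calls are allowed. */
	if (unveil(NULL, NULL) == -1) {
		perror("unveil");
		return EXIT_FAILURE;
	}

	/* Anything outside the unveiled path now simply fails... */
	if (fopen("/home/username/.ssh/id_rsa", "r") == NULL)
		puts("personal files are invisible to this process");

	/* ...while the declared directory keeps working as usual. */
	FILE *save = fopen("/home/username/savegames/0ad/slot1.sav", "w");
	if (save != NULL)
		fclose(save);
	return 0;
}
```

Something in that spirit, surfaced as a package flag or a contextual “jail this run” menu entry, would avoid the gibberish-path problem, because the app keeps using the real, user-visible locations.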
Let’s examine the use case for file security broadly. What types of data need security? The answer to that is simple: anything that puts the user in the way of economic harm.
Take QuickBooks, for example: I want my data in QuickBooks encrypted and segregated from all other user files.
Family pictures? These are digital copies; why do they need protection?
If you’re a digital artist, having your work secure in a sandbox of encryption is smart.
That doesn’t mean that applications can’t share the home, system, etc. folders, but the data therein, if it warrants protection, should be protected.
An API for application-level security and encryption is far more robust, reduces OS complexity and dumps the mainframe data-sharing model.
The answer to that is simple: anything that puts the user in the way of economic harm.
That’s a very narrow answer. I would argue it also covers anything that could lead to reputational or physical harm to a user. Family pictures don’t need protection? It depends on your family. If you’ve formed a family outside your faith, race, or “expected sexual orientation”, those photos are potentially deeply dangerous if released.
API-based sandboxing helps a trusted application protect its own data, presumably using a system-level keystore of some kind. It doesn’t help protect against an application going rogue on the rest of your system if it gets hijacked.
Well, there are many layers to data security, but basically, truth be told, the user is always the greatest risk.
The real “security” rationale for multi-user was the terminal/mainframe era, when you needed to prevent non-linear data editing from occurring, because if editing was concurrent across multiple users the data would get corrupted. By controlling when and by whom data was edited, you kept the files from being corrupted.
It was never intended or broadly used as a security tool to prevent data theft.
I still stand by my view that data security should be handled inside the application. If every application can generate its own encryption, then you’ve dramatically increased the safety of the data as a whole. So let’s say someone gets hold of your laptop. Immediately they remove the drive and read its contents. They then have to separately decrypt the data for each application, all of which can use different algorithms and keys.
The difficulty factor just went way, way up.
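To make the idea concrete, an application can already do this with any ordinary crypto library. Here is a minimal sketch using libsodium’s secretbox; the key handling is deliberately simplified, a real application would keep its key in some kind of keystore rather than generating it on the spot:

```c
/* Sketch: an application encrypts its own data record with its own key,
   using libsodium's secretbox (XSalsa20-Poly1305). */
#include <sodium.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	if (sodium_init() < 0)
		return 1;

	/* Each application keeps its own key; generated here for the demo. */
	unsigned char key[crypto_secretbox_KEYBYTES];
	crypto_secretbox_keygen(key);

	const char *record = "QuickBooks-style ledger entry: $123.45";
	size_t mlen = strlen(record);

	/* A fresh random nonce per record; stored alongside the ciphertext. */
	unsigned char nonce[crypto_secretbox_NONCEBYTES];
	randombytes_buf(nonce, sizeof nonce);

	unsigned char ciphertext[crypto_secretbox_MACBYTES + 64];
	crypto_secretbox_easy(ciphertext, (const unsigned char *)record,
	    mlen, nonce, key);

	/* Whoever pulls the drive gets only nonce + ciphertext; without this
	   application's key the record stays opaque. */
	printf("stored %zu ciphertext bytes\n", mlen + crypto_secretbox_MACBYTES);
	return 0;
}
```

If every application does something like this with its own key, pulling the drive out of a stolen laptop yields only a pile of ciphertexts that each have to be attacked separately.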
As to web security, maybe it’s time to look at virtualization of the web browser. There are hardware features that would make malware injection far more difficult and keep the computer more secure. There’s probably a very solid argument to do this.
At the very least, please do not make a password obligatory. It is annoying on Linux; I usually set it to “1”. For my use cases, physical protection of the PC is enough, I think.
Of course the most secure way is an air gap and never turning your computer on.
The least secure might be to have your home drive public in the first place.
Haiku is somewhere in between; there are several threats to consider here:
accidental destruction of data
accidental leakage of data
malicious destruction of data
theft of data
None of these problems are easy to solve, and multi-user is mostly unrelated to all of them; the biggest point of failure nowadays is untrusted software, not a second untrusted user.
It could, however, be one step in the right direction. If we assume that we have an API that allows the application to receive a file, or a copy (or instance) of it, to work on, and an API to send it back, we can then start doing something similar to OpenBSD’s pledge: the application declares that it will never touch the filesystem itself, and if it does it will be killed by the OS.
Now, on OpenBSD this is done via system calls after startup, which is already a good idea; I think we can build on it, however, by adding a set of restrictions right into the application resources.
In this case the API would bring up, say, Tracker, and the OS saves the file in a sane place (and probably records which application requested the save);
this way there are no horrible internal sandbox paths, only proper ones.
This does mean apps have to explicitly support this API, which I think is fine.
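For reference, the OpenBSD mechanism mentioned above is just a single call made after startup; a minimal sketch (OpenBSD-specific, the “stdio” promise string is pledge’s own vocabulary, and nothing here exists on Haiku):

```c
/* OpenBSD-only sketch: the process promises to use nothing but plain
   stdio; the first attempt to open a file afterwards gets it killed. */
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	if (pledge("stdio", NULL) == -1) {
		perror("pledge");
		return 1;
	}

	printf("still fine: writing to stdout is covered by \"stdio\"\n");

	/* This breaks the promise: the kernel aborts the process here. */
	fopen("/etc/passwd", "r");

	printf("never reached\n");
	return 0;
}
```

Baking an equivalent set of restrictions into the application resources, as suggested above, would just move that declaration from a runtime call into static metadata the OS could enforce from the very first instruction.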