HAIKU for those of us with disabilities

I’m not anywhere near being a developer, sometimes I wonder what kind of user I am… but I’m excited to hear about this new operating system.

What is the likelihood that “pieces” will be included so that those of us who need alternative input/output will be able to use HAIKU?

I don’t care for Microsoft, but so far… it’s the only operating system where I can get voice-recognition software that lets me both dictate and manipulate the computer.

You’ll have to stick it out with Microsoft :frowning:

Well, I myself actually like using Windows XP - which runs pretty well.

Linux might be another option to look into.

I don’t think Haiku will focus on making it easier for disabled people.

This could change in about 3 years time or so.

Since it is open source, a developer could always add/integrate voice recognition into Haiku, but it isn’t a priority.

Haiku has some catching up to do (ie: Linux is further ahead).

I’ve definitely faced WORSE things than sticking with Microsoft XP, but it doesn’t cost anything to ask!

I just figured that it would be easier to later plug-in accessibility features if you had time to think about it early on.

Given the modularity of Haiku (and its mission of being a powerful, yet so-simple-your-mom-could-use-it desktop OS), it wouldn’t surprise me if accessibility features were integrated into the system. But no doubt this would be after R1 is released.

I suggest posting to the Glass Elevator mailing list; that’s where post-R1 ideas are thrown around.

Time for a stupid question: How do I do this and where is it?

(I don’t think I mentioned how basic a user I am!)

One thing we can’t take away from Windows is its complete keyboard navigability, though M$ seems to forget to do that for new stuff. Still, being able to move windows without the mouse is a good thing.
Now, I’m sure Linux has drivers for alternative input devices, including braille printers. Being open source helps a lot there when you’re not the top seller.

As for Haiku, I think the integrated input-method API should make it possible to include a voice-recognition system that would work out of the box. I meant to do something similar once, and actually ported CMU Sphinx II, but it wasn’t stable enough to proceed any further.
OTOH, voice reading should be doable via the scripting mechanism. Apps can ask another app which TextView is active and what its text, selection, and cursor position are…
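As a rough illustration of that scripting mechanism, here is the kind of query a voice reader could make with the `hey` command-line tool. This is only a sketch: StyledEdit is just an example target, and the `View 0 of Window 0` specifier path is an assumption about that app’s window layout, so the exact path may differ.

```shell
# Ask StyledEdit, via its BeOS scripting suite, for the text of
# the first view in its first window. "Text" is a BTextView
# scripting property; the specifier path is an assumption.
hey StyledEdit get Text of View 0 of Window 0

# The same mechanism can report the current selection, which a
# voice reader could poll to follow the user's cursor.
hey StyledEdit get selection of View 0 of Window 0
```

Under the hood these are just `B_GET_PROPERTY` BMessages, so a screen-reader app could send the same queries directly through a `BMessenger` instead of shelling out to `hey`.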

Sorry, I’m bad about that. :wink:

Anyway, here’s the Glass Elevator page:

There’s the mailing list, the forum, and some other things I’ve never seen before.

I have a few disabilities as it were. One is called Irlen Syndrome. This basically translates to “I hate bright stuff”.

For almost six years now I have hoped and waited for full UI color control so that I can make my UI nice and dark. I can’t stand being blasted by white backgrounds. Unfortunately my disability is just one of many, and it’s a pretty huge task to design and enable a full system to accommodate such things.

UI color control would probably be a bit more feasible than a high-quality speech engine with accessibility integration, but it still probably won’t happen for a very long time. I remember in Dano when you could change the menu color below the combined 333 value. I was happy, but it didn’t change everything I needed, and then the fonts looked horrible because there was no color control and the antialiasing was still trying to blend to a white background.

Things have improved a bit with Zeta; most of the system is now usable with a black background, including Tracker. Still, external apps aren’t always happy about it.