I remember reading somewhere that Bond gadgets become reality after something like 30 years, though I suspect the pace of development will speed up dramatically as time goes on. Some examples off the top of my head: GPS (there's an old Bond movie with an analog 'homing device'), weapons on cars, cars that can both drive on the road and fly, and 'stealth technology' like in Die Another Day (in development).
Getting back to movie-inspired OSs, the guy who designed the amazing gesture interface in 'Minority Report' started a company and has a prototype UI in development:
http://www.youtube.com/watch?v=NwVBzx0LMNQ
http://www.geek.com/articles/chips/futuristic-minority-repor...
Yes. My post was inspired by a discussion I had with someone who was incredibly passionate about helping ordinary, non-computer-literate audiences -- e.g. the Read/Write Web Facebook Login Fiasco audience -- understand the consequences of their actions when they use a computer. It struck me as interesting that it was Schematic that was able to take the ideas from an interaction system designed to be watched to one designed to be used.
The gesture interface in MR does not solve the gorilla-arm problem. Gesture interfaces should be based on slight hand and face movements, maybe with voice cues.
"For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive — you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope.
It saved a lot of muscular expenditure of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same programme."
Sci-fi has a great influence on the development of new technology, as it's a great source of ideas. Take the moon landing as an example.
To follow on from the quote you gave above, the next stage is thought control of machines. It's got its own difficulties too, though - it's very difficult to clear your mind to the degree necessary to control things:
I wonder, how difficult would it be to detect whether a user of some sort of brain interface is or isn't irritated by the particular on-screen thing s/he is looking at?
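Purely as idle speculation on that question: if some hypothetical brain interface produced a numeric "irritation" signal, the detection step itself might be as simple as smoothing and thresholding it; the hard part would be getting a trustworthy signal in the first place. A toy sketch - every name, signal, and number here is made up:

```typescript
// Pure speculation: suppose the brain interface hands us a stream of
// numeric "irritation" readings in [0, 1] (an entirely hypothetical
// signal). The interesting part is not the threshold but telling
// sustained irritation apart from momentary noise, so we average
// over a sliding window.

function isIrritated(
  readings: number[],
  threshold = 0.7,
  windowSize = 10
): boolean {
  if (readings.length < windowSize) return false; // not enough data yet
  const recent = readings.slice(-windowSize);
  const mean = recent.reduce((sum, r) => sum + r, 0) / windowSize;
  return mean > threshold;
}

// A single spike doesn't trigger it; a sustained rise does.
console.log(isIrritated([0.1, 0.2, 0.9, 0.1, 0.2, 0.1, 0.2, 0.1, 0.2, 0.1])); // false
console.log(isIrritated([0.8, 0.9, 0.8, 0.9, 0.8, 0.9, 0.8, 0.9, 0.8, 0.9])); // true
```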
haha I can easily imagine Steve Jobs dictating commands to his engineers like Captain Picard does on the bridge:
"I want a device so amazing, so jaw-dropping that Jesus himself would want it. It's got to be like the iPhone only bigger, like the Mac OS X, except even more engaging. Make it so"
On a serious note though, working out the problems of fatigue from gestures is typical of a technology in its infancy. The problem of RSI from using a computer mouse hasn't been fully resolved, for example.
I think to a certain extent we already had Movie OS in the late 90s / early 00s, when all kinds of apps were skinned. People made all kinds of baroque skins that looked like they came out of movies and behaved much like this, but you know what? They were cluttered and hard to use.
Who remembers Winamp 3, with its fancy skins[1]? I know I breathed a sigh of relief when iTunes, with its simple yet highly functional interface, came out.
This misses the point. Skinning - in the Winamp model of 'same functionality but different look' - is not the same as effective visual storytelling and usage of animation.
I'm not talking about just how Movie OS looks, but how it behaves as a whole. I agree that sticking a Star Trek skin on Windows 98 reduces usability.
Interestingly enough, the "It's a UNIX operating system" moment from Jurassic Park used real software. It's called fsn: http://en.wikipedia.org/wiki/Fsn
You know, having taken a look at FSN's home page, I really don't think I would want file sizes represented in either of the ways it does it.
I've been using Eagle Mode - a zoomable UI - for about a week, and while it's ugly, I really can't praise the file manager highly enough. It's a fantastic improvement over the best conventional file managers, but representing file size visually would ruin it.
http://eaglemode.sourceforge.net/
I just installed Eagle Mode - very interesting indeed. The fact that you can put instructions, documentation etc. right there in situ with the relevant controls is quite interesting.
There was an article at UIST (a user interface conference) making similar points about 15 years ago. Here's one free version of the paper: http://research.sun.com/techrep/1995/abstract-33.html That paper had some actual concrete recommendations for different kinds of animations that were implemented in Self.
Maybe it will become an arms race of escalating cuteness. The OS designer needs something cuter to draw your attention away from the dancing bunny, which leads the dancing bunny malware author to come up with something even cuter than that, etc.
I thought this was a great quote from the Anti-Mac article:
"The desktop metaphor assumes we save training time by taking advantage of the time that users have already invested in learning to operate the traditional office with its paper documents and filing cabinets. But the next generation of users will make their learning investments with computers, and it is counterproductive to give them interfaces based on awkward imitations of obsolete technologies."
It reminds me of a recent (non-computer-related) incident in my own life. My son's preschool teachers recently had his class do an art project where they made phones out of two pieces of construction paper and a piece of yarn. The two pieces of construction paper were cut to look like an old Bell telephone handset and base. I'm sure some of you can see where this is going... the amusing (to me, at least) result was that all of the children followed directions and built their project, but couldn't identify what it was. Most of them had never seen a telephone with a cord and couldn't figure out how an object too big to fit in your pocket was supposed to be a phone. :-)
I think this illustrates a long-term problem with computer interfaces that mimic real-world objects and/or interactions, even in the simple case of icons that resemble real-world objects. The real world changes. We haven't had to deal with that much yet, because visual computer interfaces have only recently become prevalent. But as time wears on, I think this is only going to get more awkward.
As an example related to my story above, both the iPhone UI and the Android UI use an icon that looks like an old Bell telephone handset to represent the dialer. That image still makes sense to my generation, but IMHO to my kids it's just gonna look like a weird abstract symbol. Similarly, many applications use an icon that looks like a Rolodex to represent the "phonebook" application. Heck, I think I've got co-workers who've never seen a Rolodex!
Just as the floppy disk has become the icon for Save buttons. You and I have used floppies, so it makes a fair amount of sense to us, but many people will never see a real one.
I guess the thing may be that movies have the luxury of make-believe; that is, they have absolute freedom with these interfaces because as someone else pointed out, they don't actually have to function. Sure, their dialogs and warning messages are great to convey a single important message, but what about someone who needs to do something on the computer that reports the "lockdown" status of an area? And do you really want it displayed with a huge green bar every time you get into something that was previously protected? Aren't there some situations where discretion is preferable?
In the milder example of the email flying off the screen: who wants to sit around watching these animations in everyday use? Sure, they're a neat one-off or demo, but when you have a stack of emails to pop off, nobody wants to wait for a cute little animation to finish each time. You can probably toggle it off, but after the first few times, who would leave it on? It seems like something that would just make grandma's computer usage even slower.
I do think that we could probably look at movies more and try to cherry-pick some cool ideas for dialogs or animations, but in general movie widgets, like everything else in the movies, are larger-than-life. They wouldn't work if you tried to deploy them as a real usable UI.
"And do you really want it displayed with a huge green bar every time you get into something that was previously protected? Aren't there some situations where discretion is preferable?"
Well - we won't know without trying. But it's clear that in the Read/Write Web Facebook case, a bunch of people thought they had logged in to Facebook when in fact they hadn't, and that they didn't even know where they were.
You may not want it every time. But at the moment, we don't even have it at all. That's restrictive in the kind of messages we can send out.
For emails flying off the screen: the execution is in the detail. A few seconds may be too long. For some people, it may not even be long enough. But the animation isn't necessarily just showing that an email is being sent: it can convey a lot of other information in a clear, yet subtle way. It's interesting that there's no visual cue when an email is sent that distinguishes between sending to a single individual or a group of people, or an even larger group of people.
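As a rough illustration of that last point, here's a minimal sketch of how a send animation could scale with audience size. The names and timings are all invented for the example; nothing here is a real mail-client API.

```typescript
// Hypothetical sketch: scale the "email flying away" cue with the
// size of the audience, so the same event carries extra information
// visually. All names and durations here are made up.

type SendCue = "single" | "group" | "broadcast";

function cueForRecipients(recipientCount: number): SendCue {
  if (recipientCount <= 1) return "single";
  if (recipientCount <= 20) return "group";
  return "broadcast"; // mailing-list scale: make it unmistakable
}

function describeCue(cue: SendCue): string {
  switch (cue) {
    case "single":
      return "one envelope slides off-screen (~300 ms)";
    case "group":
      return "a small fan of envelopes, one per recipient (~600 ms)";
    case "broadcast":
      return "a swarm of envelopes plus a recipient-count badge (~1 s)";
  }
}

console.log(describeCue(cueForRecipients(1)));   // single
console.log(describeCue(cueForRecipients(8)));   // group
console.log(describeCue(cueForRecipients(500))); // broadcast
```

The point isn't the specific thresholds; it's that sending to one person and sending to five hundred currently look identical, and they needn't.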
...and try to cherry-pick some cool ideas for dialogs...
See, I think that's part of the problem. Dialogs pretend that the computer is something that can engage in conversation. But computers don't understand human and (many) humans don't understand computer. So the illusion that computers can engage in these 'dialogs' only exacerbates the problem by perpetuating miscommunications.
I think what interfaces need to strive for is a different model of how the user is expected to interact with the computer. It used to be that most people worked with creating and editing files, but I think modern use has shifted towards navigating between places -- unfortunately the old UI paradigm of file management is still dominant among a significant portion of modern software.
Well, I meant dialog like "dialog boxes", not like a discourse.
I think the shift represents a change in the primary use of a computer. Early on it was just a thing that let you save and read files written with word processors and the like. Now, with the internet, people mostly use it to scoot around a global network of files, most of which they have never saved or touched themselves.
It would be interesting to see some truly unique concepts of computer usage. KDE wanted to shake it up with KDE 4 and change the whole paradigm, but they ended up essentially just adding widgets and leaving everything else the same. I don't know if there's just no other logical representation for the way we use our computers (doubtful) or if perhaps nobody can seem to think of it because we're too acclimated to the traditional window-based model.
I know you meant dialog boxes, but dialog boxes have that name for a reason. They usually tell the user something and then ask for a response. The problem is that the computer doesn't really understand what it's asking about, it's just a pre-recorded message. Dialog boxes on a computer are the equivalent of the automated phone menus and just as frustrating (although those are getting better with speech recognition).
I have some ideas that I think are pretty good but I'm not an expert coder so progress is slow. I think windows is mostly the right idea, but the current implementations are crammed with wastefulness of all types such as application windows that take up unnecessary screen space with menus and buttons that are rarely used (this is getting better, but in my opinion the idea of 'applications' is part of the problem).
Another thing is window resizing -- full user control over this is pointless, and a lot of time is wasted resizing windows so you can see things the way you want. Various X window managers try to fix this with smart tiling and so on, but I've yet to find one that's really satisfactory.
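To make "smart tiling" concrete, here's a toy sketch of the kind of automatic layout a tiling window manager computes: one "master" window on the left, the rest stacked on the right. Real tiling window managers are far more sophisticated; this only shows why the user never has to drag window borders by hand.

```typescript
// Toy sketch of automatic tiling: one "master" window on the left,
// remaining windows stacked on the right. Layout is derived purely
// from the window count, so no manual resizing is ever needed.

interface Rect { x: number; y: number; w: number; h: number; }

function tile(screen: Rect, windowCount: number): Rect[] {
  if (windowCount <= 0) return [];
  if (windowCount === 1) return [screen]; // one window fills the screen

  const masterW = Math.floor(screen.w / 2);
  const layout: Rect[] = [
    { x: screen.x, y: screen.y, w: masterW, h: screen.h },
  ];

  const stackH = Math.floor(screen.h / (windowCount - 1));
  for (let i = 0; i < windowCount - 1; i++) {
    layout.push({
      x: screen.x + masterW,
      y: screen.y + i * stackH,
      w: screen.w - masterW,
      h: stackH,
    });
  }
  return layout;
}

console.log(tile({ x: 0, y: 0, w: 1920, h: 1080 }, 3));
// => one 960x1080 master on the left, two 960x540 windows stacked right
```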
Another sad thing is that even simple drag and drop, which has been around for ages, is poorly implemented in the majority of programs, if at all.
I think it's nice for a UI to show you things in a way you can respond to on an almost instinctive level, but will it help with UI designers who just don't think?
For my daily work of pounding out code, I'd certainly want none of this going on. I always disable silly OS animations. However, for my 2 kids who instantly 'get' my iPhone and iPad but feel less secure with my MacBook, I think Movie OS is the kind of thing that would convey useful meaning ... until they understood it all and the varnish wears off and they're asking how to disable the Movie so they can work with the OS.
I completely agree - expanding the UI palette lets you use more varied, subtle (or less so) tools in different contexts for different audiences. People who already understand the consequences of their actions may not want/need to be constantly reminded.
Those of us who are perfectly happy to interact with our computers through a command line are a vanishingly small proportion of people who use a computer.
But those of us who dwell in a black screen with white monospaced text and a flashing cursor need to make more human interfaces if we are to make computers more accessible. Movie OS is possibly overkill, but devices like the iPad show promising signs of progress in HCI.
Yes. We need to get other people - people skilled in communicating to large audiences (and these people exist: they just exist as advertisers, graphic novelists, film directors, storyboard artists etc) involved in how we design our interfaces for everyone else, for the people who don't want to use a CLI.
Hmmm - varied
At the moment I close my browser in the same way as I close my editor: with the little cross in the top right.
But I close my new intuitive iPad app by stroking the background image to the top left? Or turning the window upside down? And of course it's different for each app.
For a sense of what designer interfaces do for you, take a look at all the non-standard, non-rectangular DVD/MP3 player tools installed by random hardware makers.
To me, many of these issues are the computer telling the user about stuff they shouldn't have to worry about. For example, wouldn't cloud computing solve most of these problems by not giving responsibility for them to the user?
"Change your battery" - you shouldn't lose your work if the battery fails (a sketch of this idea follows below).
"Formatting this volume..." - people would not do this kind of sys admin themselves.
"Permanent delete" - with storage so large nowadays, there is no particular reason to delete something for space reasons.
"Disk full" - shouldn't happen; in the cloud it could never happen, you'd just get charged more. Gmail, for example, was designed never to get full.
"Checking for software" - in the cloud it would always be up to date.
"Downloading messages" progress bar - with web apps it's either loaded or it's not.
Of course these are not the only problems, but they are the type that is most difficult for users, because they are more about sys admin than about getting the task done.
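Here's a minimal sketch of the autosave idea behind the battery example above: every edit schedules a save, rapid edits are coalesced, and the user never sees a "save your work!" dialog. saveToCloud() is a hypothetical stand-in for a real storage call.

```typescript
// Minimal autosave sketch: edits are debounced and persisted in the
// background, so a dying battery costs at most a few seconds of work
// and never interrupts the user. saveToCloud() is hypothetical.

async function saveToCloud(documentText: string): Promise<void> {
  console.log(`saved ${documentText.length} chars`);
}

function makeAutosaver(delayMs = 2000) {
  let pending: ReturnType<typeof setTimeout> | undefined;

  return function onEdit(documentText: string): void {
    if (pending !== undefined) clearTimeout(pending); // coalesce rapid edits
    pending = setTimeout(() => {
      pending = undefined;
      // If the battery dies before this fires, at most delayMs worth
      // of typing is lost - and nobody had to click anything.
      void saveToCloud(documentText);
    }, delayMs);
  };
}

const onEdit = makeAutosaver();
onEdit("first draft");
onEdit("first draft, revised"); // supersedes the pending save above
```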
I would like to agree... but I can't quite agree with all the points.
Permanent delete: believe me, you can fill up anything. Space exists to be filled; junk multiplies... XD That said, I am looking into deduplicating WORM storage for my own use.
Disk full / charged more: Not an appropriate business model for many customers. It's hard enough to keep a cap on your phone bill where you only have to worry about 2 or 3 factors (time used, time of day, distance). On a disk, god knows what wants to store temporary and permanent files, it's not immediately clear how much space each picture or movie or song file will take...
I disagree, but I can see where you're coming from based on the examples I picked (which were all biased toward operating system maintenance actions, given the source material).
From the point of view of a naive user, there are new types of interactions that can have massive social implications: e.g. changing Facebook relationship status. At the moment, the notification of that status change to the user is pretty minimal. It does not, for example, make clear that that change will be propagated to, say, over 500 people, and all of their contacts.
There may be people who are more concerned about their social standing than whether they've accidentally formatted a volume or not.
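One way a client could surface that propagation is a confirmation message that states the audience size before committing the change. A hedged sketch - the field names and numbers are entirely made up, and nothing like this exists in Facebook's actual UI as far as I know:

```typescript
// Hypothetical sketch: before committing a profile change, tell the
// user how far it will propagate. All names and figures are invented.

interface Change {
  field: string;          // e.g. "relationship status"
  newValue: string;
  directAudience: number; // friends who get a feed story
  secondaryReach: number; // rough estimate of friends-of-friends
}

function confirmationText(c: Change): string {
  return (
    `Set ${c.field} to "${c.newValue}"?\n` +
    `This will appear in the feeds of ${c.directAudience} friends\n` +
    `and may reach roughly ${c.secondaryReach.toLocaleString()} of their contacts.`
  );
}

console.log(confirmationText({
  field: "relationship status",
  newValue: "single",
  directAudience: 523,
  secondaryReach: 87000,
}));
```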
Well, really the problem with Facebook is that it's too complex.
Though I'd agree there are non-sys-admin problems people have, the worst problems are all sys-admin-type ones: viruses, running out of space, slowness.
The user-space stuff is nice to have, though the movies tend to have some impressive visualization and lots of data (like floorplans, schematics etc.) in order to show things like that. Facebook could maybe show a picture of your 500 friends and say "sending..." though.
Undo is good, but I can't help but chuckle at the thought of "Whoops! He's not really my boyfriend" going out to 500 acquaintances 20 seconds after the opposite statement.
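That "whoops" window is exactly why some mail clients hold a message for a short grace period before actually sending it (Gmail's Undo Send works roughly this way). A minimal sketch of the pattern; publish() and the timings are made up:

```typescript
// Minimal "undo send" sketch: the status change is held for a grace
// period and only actually published if the user doesn't cancel in
// time. publish() stands in for a hypothetical network call.

function publish(update: string): void {
  console.log(`published: ${update}`);
}

function sendWithUndo(update: string, graceMs = 20_000): () => boolean {
  let published = false;
  const timer = setTimeout(() => {
    published = true;
    publish(update);
  }, graceMs);

  // Returns an undo function: true means we caught it in time.
  return function undo(): boolean {
    if (published) return false; // too late, it already went out
    clearTimeout(timer);
    return true;
  };
}

const undo = sendWithUndo('Now "in a relationship"');
// User notices the mistake a few seconds later:
console.log(undo() ? "never published" : "too late"); // never published
```

Within the grace period, nothing was ever sent to those 500 acquaintances, so there's no embarrassing retraction to broadcast.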