

The history of Android

Voice Actions—a supercomputer in your pocket

In August 2010, a new feature called "Voice Actions" launched in the Android Market as part of the Voice Search app. Voice Actions allowed users to issue voice commands to their phone, and Android would try to interpret them and do something smart. A command like "Navigate to [address]" would fire up Google Maps and start turn-by-turn navigation to your stated destination. You could also send texts or e-mails, make a call, open a website, get directions, or view a location on a map—all just by speaking.
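Voice Actions' plumbing lived inside Google's own apps, but the navigation hand-off it performed can be illustrated with Android's standard intent system. The sketch below is a hypothetical illustration rather than Google's actual code; it assumes Google Maps is installed to handle the `google.navigation:` URI scheme.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

// Hypothetical illustration: hand a destination off to Google Maps
// turn-by-turn navigation, the same end result a spoken
// "Navigate to [address]" command produced.
public class NavigationLauncher {
    public static void navigateTo(Activity activity, String address) {
        // The google.navigation: URI scheme is handled by Google Maps.
        Uri destination = Uri.parse("google.navigation:q=" + Uri.encode(address));
        activity.startActivity(new Intent(Intent.ACTION_VIEW, destination));
    }
}
```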

[YouTube video link]

Voice Actions was the culmination of a new app design philosophy for Google. It was the most advanced voice control software of its time, and the secret was that Google wasn't doing any of the computing on the device. Voice recognition, in general, was very CPU intensive. In fact, many voice recognition programs still have a "speed versus accuracy" setting, where users can choose how long they are willing to wait for the voice recognition algorithms to work—more CPU power means better accuracy.

Google's innovation was not bothering to do the voice recognition computing on the phone's limited processor. When a command was spoken, the user's voice was packaged up and shipped out over the Internet to Google's cloud servers. There, Google's farm of supercomputers pored over the message, interpreted it, and shipped it back to the phone. It was a long journey, but the Internet was finally fast enough to accomplish something like this in a second or two.
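Third-party apps could tap the same network-backed recognizer through Android's public speech API. A minimal sketch using the standard `RecognizerIntent`, with the rest of the Activity scaffolding assumed:

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class VoiceDemoActivity extends Activity {
    private static final int SPEECH_REQUEST = 1;

    // Launch the system speech recognizer; on Voice Search-era devices,
    // the captured audio was transcribed on Google's servers.
    void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, SPEECH_REQUEST);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == SPEECH_REQUEST && resultCode == RESULT_OK) {
            // The recognizer returns a ranked list of candidate transcriptions.
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            // results.get(0) is the most likely transcription.
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```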

Many people throw the phrase "cloud computing" around to mean "anything that is stored on a server," but this was actual cloud computing. Google was doing hardcore compute operations in the cloud, and because it could throw a ridiculous amount of CPU power at the problem, the only limit to the voice recognition accuracy was the algorithms themselves. The software didn't need to be individually "trained" by each user, because everyone who used Voice Actions was training it all the time. Using the power of the Internet, Android put a supercomputer in your pocket, and, compared to existing solutions, moving the voice recognition workload from a pocket-sized computer to a room-sized computer greatly increased accuracy.

Voice recognition had been a project of Google's for some time, and it all started with an 800 number. 1-800-GOOG-411 was a free phone information service that Google launched in April 2007. It worked just like the 411 information services that had existed for years—users could call the number and ask for a phone book lookup—but Google offered it for free. No humans were involved in the lookup process; the 411 service was powered by voice recognition and a text-to-speech engine. Voice Actions was only possible after three years of the public teaching Google how to hear.

Voice recognition was a great example of Google's extremely long-term thinking—the company wasn't afraid to invest in a project that wouldn't become a commercial product for several years. Today, voice recognition powers products all across Google. It's used for voice input in the Google Search app, Android's voice typing, and on Google.com. It's also the primary input interface for Google Glass and Android Wear.

The company even uses it beyond input. Google's voice recognition technology is used to transcribe YouTube videos, which powers automatic closed captioning for the hearing impaired. The transcriptions are even indexed by Google, so you can search for words that were said in a video. Voice is the future of many products, and this long-term planning has made Google one of the few major tech companies with an in-house voice recognition service. Most other voice recognition products, like Apple's Siri and Samsung devices, are forced to use—and pay a license fee for—voice recognition technology from Nuance.

With the computer hearing system up and running, Google is applying this strategy to computer vision next. That's why things like Google Goggles, Google Image Search, and Project Tango exist. Just as in the GOOG-411 days, these projects are in their early stages. When Google's robot division gets off the ground with a real robot, it will need to see and hear, and the company's computer vision and hearing projects will likely give it a head start.

The Nexus S, the first Nexus phone made by Samsung.

Android 2.3 Gingerbread—the first major UI overhaul

Gingerbread was released in December 2010, a whopping seven months after the release of Android 2.2. The wait was worth it, though, as Android 2.3 changed just about every screen in the OS. It was the first major overhaul since Android took its initial form in version 0.9, and 2.3 kicked off a series of continual revamps in an attempt to turn Android from an ugly duckling into something capable of holding its own—aesthetically—against the iPhone.

And speaking of Apple, six months earlier, the company had released the iPhone 4 and iOS 4, which added multitasking and FaceTime video chat. Microsoft was finally back in the game, too. The company jumped into the modern smartphone era with the launch of Windows Phone 7 in November 2010.

Android 2.3 focused a lot on interface design, but with no direction or design documents, many apps ended up with their own bespoke themes. Some apps went with a flatter, darker theme, some used a gradient-filled, bubbly dark theme, and others went with a high-contrast white and green look. While it wasn't cohesive, Gingerbread accomplished the goal of modernizing nearly every part of the OS. It was a good thing, too, because the next phone version of Android wouldn't arrive until nearly a year later.

Gingerbread's launch device was the Nexus S, Google's second flagship device and the first Nexus manufactured by Samsung. While today we are used to new CPU models every year, back then that wasn't the case. The Nexus S had a 1GHz Cortex-A8 processor, just like the Nexus One. The GPU was slightly faster, and that was it in the speed department. It was a little bigger than the Nexus One, with a 4-inch, 800×480 AMOLED display.

Spec-wise, the Nexus S might seem like a tame upgrade, but it was actually home to a lot of firsts for Android. The Nexus S was Google's first flagship to shun a MicroSD slot, shipping with 16GB of on-board memory. The Nexus One had only 512MB of storage, but it had a MicroSD slot. Removing the SD slot simplified storage management for users—there was just one pool now—but hurt expandability for power users. It was also Google's first phone to have NFC, a special chip in the back of the phone that could transfer information when touched to another NFC chip. For now, the Nexus S could only read NFC tags—it couldn't send data.
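Gingerbread exposed this read-only capability to developers through the new android.nfc APIs. Here is a minimal sketch of how an app of that era could receive a scanned NDEF tag; it assumes the activity is registered in the manifest for the tag-discovered intent and holds the NFC permission.

```java
import android.app.Activity;
import android.content.Intent;
import android.nfc.NdefMessage;
import android.nfc.NfcAdapter;
import android.os.Bundle;
import android.os.Parcelable;

// Gingerbread-era NFC reading: the system dispatches an intent when a
// tag is scanned, and any NDEF payload rides along as an intent extra.
public class TagReaderActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = getIntent();
        if (NfcAdapter.ACTION_TAG_DISCOVERED.equals(intent.getAction())) {
            Parcelable[] raw =
                    intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
            if (raw != null && raw.length > 0) {
                NdefMessage message = (NdefMessage) raw[0];
                // message.getRecords() yields the tag's NDEF records.
            }
        }
    }
}
```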

Thanks to some upgrades in Gingerbread, the Nexus S was one of the first Android phones to ship without a hardware D-pad or trackball. It was now down to just the power, volume, and four navigation buttons. The Nexus S was also a precursor to the crazy curved-screen phones of today, as Samsung outfitted it with a piece of slightly curved glass.

Gingerbread changed the status bar and wallpaper, and it added a bunch of new icons. Photo by Ron Amadeo

An upgraded "Nexus" live wallpaper was released as an exclusive addition to the Nexus S. It was basically the same idea as the Nexus One version, with its animated streaks of light. On the Nexus S, the "grid" design was removed and replaced with a wavy blue/gray background. The dock at the bottom was given square corners and colored icons.
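Live wallpapers like this one are built on Android's WallpaperService API, which had been available since Android 2.1. The skeleton below is a minimal sketch, not the Nexus wallpaper's actual source: a service hosts an Engine that repeatedly draws onto the wallpaper surface while it is visible.

```java
import android.graphics.Canvas;
import android.graphics.Color;
import android.os.Handler;
import android.service.wallpaper.WallpaperService;
import android.view.SurfaceHolder;

public class StreaksWallpaper extends WallpaperService {
    @Override
    public Engine onCreateEngine() {
        return new StreaksEngine();
    }

    private class StreaksEngine extends Engine {
        private final Handler handler = new Handler();
        private final Runnable drawRunner = new Runnable() {
            public void run() { drawFrame(); }
        };
        private boolean visible;

        @Override
        public void onVisibilityChanged(boolean isVisible) {
            visible = isVisible;
            if (visible) {
                drawFrame();           // resume drawing when shown
            } else {
                handler.removeCallbacks(drawRunner); // stop when hidden
            }
        }

        private void drawFrame() {
            SurfaceHolder holder = getSurfaceHolder();
            Canvas canvas = holder.lockCanvas();
            if (canvas != null) {
                // Placeholder for the animated streaks: just clear the frame.
                canvas.drawColor(Color.DKGRAY);
                holder.unlockCanvasAndPost(canvas);
            }
            if (visible) {
                handler.postDelayed(drawRunner, 16); // schedule the next frame
            }
        }
    }
}
```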

The new notification panel and menu. Photo by Ron Amadeo

The status bar was finally overhauled from the version that first debuted in 0.9. The bar was changed from a white gradient to flat black, and all the icons were redrawn in gray and green. Just about everything looked crisper and more modern thanks to the sharp-angled icon design and higher resolution. The strangest decisions were probably the removal of the AM/PM indicator from the status bar clock and the confusing shade of gray used for the signal bars. Despite gray being used for many status bar icons, and despite there being four gray bars in the above screenshot, Android was actually indicating no cellular signal. Green bars indicated a signal; gray bars indicated "empty" signal slots.

The green status bar icons in Gingerbread also doubled as a network connectivity indicator. If you had a working connection to Google's servers, the icons were green; if there was no connection to Google, the icons turned white. This made it easy to tell whether you had live connectivity while you were out and about.

The notification panel was changed from the aging Android 1.5 design. Again, we saw a UI piece that changed from a light theme to a dark theme, getting a dark gray header, black background, and gray-on-black text.

The menu was darkened too, changing from a white background to black with a slight transparency. The contrast between the menu icons and the background wasn't as strong as it should have been, because the gray icons were the same color as they had been on the white background. Requiring a color change would have meant every developer had to make new icons, so Google went with the preexisting gray on black. This was a change at the system level, so the new menu showed up in every app.



Ron Amadeo / Ron is the Reviews Editor at Ars Technica, where he specializes in Android OS and Google products. He is always on the hunt for a new gadget and loves to rip things apart to see how they work.

@RonAmadeo


via: http://arstechnica.com/gadgets/2014/06/building-android-a-40000-word-history-of-googles-mobile-os/14/

Translator: [translator ID] Proofreader: [proofreader ID]

This article was originally translated by LCTT and proudly presented by Linux中国 (Linux.cn).