  • The iPhone in Your Pocket Is Worth Millions

    Several years ago, I had a bit of fun estimating how much an iPhone would have cost to make in the 1990s. The impetus was a story making the rounds on the web.

    A journalist had found a full-page newspaper ad from RadioShack dating back to 1991. He was rightly amazed that all 13 of the advertised electronic gadgets – computer, camcorder, answering machine, cordless phone, etc. – were now integrated into a single iPhone. The cost of those 13 gadgets, moreover, summed to more than $3,000. Wow, he enthused, most of us now hold $3,000 worth of electronics in the palm of our hand.

    I saluted the writer’s general thrust but noted that he had wildly underestimated the true worth of our modern handheld computers. In fact, the computing power, data storage capacity, and communications bandwidth of an iPhone in 2014 would have cost at least $3 million back in 1991. He had underestimated the pace of advance by three orders of magnitude (or a factor of 1,000).

    Well, in a recent podcast, our old friend Richard Bennett of High Tech Forum brought up the $3 million iPhone 5 from 2014, so I decided to update the estimate. For the new analysis, I applied the same method to my own iPhone 7, purchased in the fall of 2016 – 25 years after the 1991 RadioShack ad.

    My iPhone 7 has 128 gigabytes (GB) of flash memory, which would have cost around $5.76 million back in 1991. Its A10 processor, which includes a CPU and GPU, has 3.3 billion transistors, running at 2.34 gigahertz (GHz) and delivering roughly 120,000 million instructions per second (MIPS). This amount of computing power would have cost something like $3.6 million back in 1991.

    The iPhone 7 also delivers astonishing communications speed via 4G LTE mobile networks. Peak and average mobile speeds vary, depending on geography, network load, and other factors, so I just decided to use the speed I normally get on my mobile LTE connection (not Wi-Fi) at my office. With just two of five dots’ worth of signal strength, I enjoy a connection of 33 megabits per second (Mbps). That kind of wireless bandwidth might have cost something like $3.3 million back in 1991.

    Adding it up, we get $5.76 + $3.6 + $3.3 = $12.66 million to produce today’s iPhone back in 1991. And that’s just for the three components that are easiest to measure and compare across time. This estimate doesn’t include the camera, display, random access memory (RAM), MEMS gyroscope and accelerometer, or any of the other amazing parts and features packed into an impossibly compact package. Nor does this account for inflation, which means our comparison may understate the effect.
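
    As a rough sanity check, the Python sketch below simply backs out the 1991 unit prices implied by the figures above and re-adds the three components; the unit prices are inferred from this article's own numbers, not independently sourced.

    ```python
    # Back-of-envelope check of the 1991-equivalent cost estimate above.
    # The per-unit 1991 prices are inferred from this article's figures,
    # not independently sourced.

    flash_mb = 128 * 1000          # 128 GB of flash storage, in megabytes
    mips = 120_000                 # A10 compute, in millions of instructions per second
    bandwidth_mbps = 33            # measured LTE throughput

    price_per_mb_1991 = 45         # implied by ~$5.76M for 128 GB
    price_per_mips_1991 = 30       # implied by ~$3.6M for 120,000 MIPS
    price_per_mbps_1991 = 100_000  # implied by ~$3.3M for 33 Mbps

    total = (flash_mb * price_per_mb_1991
             + mips * price_per_mips_1991
             + bandwidth_mbps * price_per_mbps_1991)
    print(f"1991-equivalent cost: ${total / 1e6:.2f} million")  # $12.66 million
    ```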

    These are fairly rough estimates. Yet it’s interesting that the new $12-million figure is four times the $3-million estimate from three years ago – which just happens to be the pace of Moore’s law, a doubling every 18 months or so. By many accounts, Moore’s law is slowing down or is even “dead.” Yet these types of cost-performance improvements suggest Moore’s law, at least for now, lives on.
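
    For readers checking that pace claim, a doubling every 18 months over the roughly 36 months between the two estimates compounds to a factor of four:

    ```python
    # Growth factor from a doubling every 18 months over 36 months.
    months_elapsed = 36
    doubling_period = 18
    print(2 ** (months_elapsed / doubling_period))  # 4.0, matching the ~$3M to ~$12M jump
    ```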

    Reprinted from American Enterprise Institute.


    Bret Swanson

    Bret Swanson is a visiting fellow at AEI’s Center for Internet, Communications, and Technology Policy and president of Entropy Economics LLC, a strategic research firm specializing in technology, innovation, and the global economy.

    This article was originally published on FEE.org. Read the original article.



  • Facial Recognition, Differential Privacy, and Trade-Offs in Apple’s Latest OS Releases

    Many users rely on cloud-based machine learning and data collection for everything from tagging photos of friends online to remembering shopping preferences. Although this can be useful and convenient, it can also be a user privacy disaster. With new machine learning features in its latest phone and desktop operating system releases, Apple is exploring ways to provide these kinds of services and collect related user data with more regard for privacy. Two of these features—on-device facial recognition and differential privacy—deserve a closer look from a privacy perspective. While we applaud these steps, it’s hard to know how effective they are without more information from Apple about their implementation and methods.

    Facial recognition and machine learning

    Let’s start with the new object and facial recognition feature for the Photos app. The machine learning processing necessary for an app like Photos to recognize faces in pictures is usually run in the cloud, exposing identifiable user data to security threats. Instead, Apple has bucked this industry trend and opted to develop a system that runs in the background entirely on your phone, tablet, or laptop, without your photos ever having to be uploaded to the cloud. Keeping user data on the device like this—rather than sending it off to Apple’s servers or other third parties—is often better for user privacy and security.

    The choice to run machine learning models like facial recognition on a device rather than in the cloud involves some trade-offs. When deployed this way, Apple loses speed, power, and instant access to mountains of user data for its facial recognition machine learning model. On the other hand, users gain something much more important: privacy and control over their information. Running these services on the device rather than in the cloud gives users a higher degree of privacy, especially in terms of law enforcement access to their data.

    While cloud is often the default for large-scale data processing, Apple has shown that it doesn’t have to be. With these trade-offs in mind, Apple has rightly recognized that privacy is too great a price to pay when working with data as sensitive and identifiable as users’ private photos. Running a machine learning model on the device is not a privacy guarantee—but at the very least, it’s a valuable effort to offer technically sophisticated facial recognition functionality to users without requiring all of them to hand over their photos.

    Differential privacy

    The second noteworthy feature of Apple’s latest release is a technique called differential privacy. In general, differential privacy is a way to analyze large datasets while mathematically limiting how much any result can reveal about a single individual, keeping aggregate statistics as accurate as possible. It’s important to note that Apple is not the first large-scale data operation to take on differential privacy: Microsoft researchers pioneered the field, Google employs anonymized data collection algorithms, and the Census Bureau has released a differentially private dataset. Collectively, these initiatives show the way forward for other parts of the tech industry: when user data needs to be collected, there are often cleverer, safer, more privacy-respecting ways to do it.

    In this case, Apple is trying to ensure that queries on its database of user data don’t leak too much information about any individuals. The best way to do that is to not have a database full of private information—which is where differential privacy comes in. Differential privacy helps companies like Apple learn as much as possible about their users in general without revealing identifiable information about any individual user in particular. Differentially private datasets and analysis can, for example, answer questions about what kinds of people like certain products, what topic is most popular in a news cycle, or how an application tends to break.
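
    To make the idea concrete, here is a minimal, textbook-style sketch of one classic differentially private mechanism, the Laplace mechanism for count queries. It illustrates the general concept only; it is not Apple’s implementation, and the epsilon value and example data are invented for the illustration.

    ```python
    import random

    def private_count(records, predicate, epsilon=0.5):
        """Answer "how many records satisfy predicate?" with Laplace noise.

        A count changes by at most 1 when one person's record is added or
        removed (sensitivity 1), so noise drawn from Laplace(0, 1/epsilon)
        gives epsilon-differential privacy for this single query.
        """
        true_count = sum(1 for r in records if predicate(r))
        # The difference of two exponential samples is a Laplace(0, 1/epsilon) sample.
        noise = random.expovariate(epsilon) - random.expovariate(epsilon)
        return true_count + noise

    # Toy example: how many users have some hypothetical setting enabled?
    users = [{"setting_on": random.random() < 0.3} for _ in range(10_000)]
    print(private_count(users, lambda u: u["setting_on"]))  # near 3,000, never exact
    ```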

    Apple has released few details about its specific approach to differential privacy. It has publicly mentioned statistics and computer science methods like hashing (deterministically transforming data into a fixed-length, random-looking string), subsampling (using only a portion of all the data), and noise injection (systematically adding random data to obscure individuals’ information). But until Apple provides more information about its process (which it may do in a white paper, as it has in the past), we are left guessing as to exactly how and at what point in data collection and analysis such methods are applied.
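
    Since those details are unknown, the toy sketch below shows only how the three ingredients Apple has named could fit together in a single client-side report; the bucket count, sampling rate, and flip probability are invented for illustration and are not Apple’s parameters.

    ```python
    import hashlib
    import random

    def client_report(value, num_buckets=256, sample_prob=0.1, flip_prob=0.25):
        """Toy privatized client-side report (illustrative only, not Apple's design).

        1. Subsampling: most of the time, send nothing at all.
        2. Hashing: map the raw value (say, an emoji or search term) to a small
           bucket index, so the server never sees the value itself.
        3. Noise injection: randomly flip bits of the one-hot bucket vector
           (randomized response), so any single report is plausibly deniable.
        """
        if random.random() > sample_prob:                          # subsampling
            return None
        digest = hashlib.sha256(value.encode("utf-8")).digest()
        bucket = int.from_bytes(digest[:4], "big") % num_buckets   # hashing
        one_hot = [1 if i == bucket else 0 for i in range(num_buckets)]
        return [b ^ int(random.random() < flip_prob) for b in one_hot]  # noise injection

    print(client_report("pizza"))  # either None or a noisy 256-bit vector
    ```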

    Just as on-device machine learning has trade-offs, so too does differential privacy. Differential privacy relies on the concept of a privacy budget: essentially, the idea that you can only make so much use of your data before its privacy-preserving properties are compromised. This is a tricky balancing act between accuracy and anonymity. The parameters and inputs of a given privacy budget describe how information is being collected, how it is being processed, and what the privacy guarantees are.
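
    One minimal way to picture the budget is the basic sequential-composition rule, under which the epsilons spent on successive queries simply add up; the sketch below tracks that spending and refuses queries once the budget runs out (the numbers are arbitrary).

    ```python
    class PrivacyBudget:
        """Minimal privacy-budget tracker under basic sequential composition."""

        def __init__(self, total_epsilon):
            self.remaining = total_epsilon

        def spend(self, epsilon):
            if epsilon > self.remaining:
                raise RuntimeError("Budget exhausted; answering would weaken the guarantee.")
            self.remaining -= epsilon

    budget = PrivacyBudget(total_epsilon=1.0)
    budget.spend(0.5)   # first query
    budget.spend(0.5)   # second query
    # budget.spend(0.1) would now raise: further answers must be refused or made noisier.
    ```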

    With the new release, Apple is employing differential privacy methods when collecting usage data on typing, emoji, and searching in an attempt to provide better predictive suggestions. To date, differential privacy has had much more academic attention than practical application, so it’s interesting and important to see major technology companies applying it—even if that application has both good and bad potential consequences.

    On the good side, Apple has apparently put some work into collecting user data with regard for privacy. What’s more, the data collection that these differential privacy methods apply to is opt-in, a step we’re very glad to see Apple take.

    However, Apple is collecting more data than it ever has before. Differential privacy is still a new, fairly experimental pursuit, and Apple is putting it to the test against millions of users’ private data. And without any transparency into the methods employed, the public and the research community have no way to verify the implementation—which, just like any other initial release, is very likely to have flaws. Although differential privacy is meant to mathematically safeguard against such flaws in theory, the details of such a large roll-out can blow away those guarantees. Apple’s developer materials indicate that it’s well aware of these requirements—but with Apple both building and utilizing its datasets without any oversight, we have to rely on it to self-police.

    In the cases of both facial recognition and differential privacy, Apple deserves credit for implementing technology with user privacy in mind. But to truly advance the cause of privacy-enhancing technologies, Apple should release more details about its methods to allow other technologists, researchers, and companies to learn from it and move toward even more effective on-device machine learning and differential privacy.

    Source: Facial Recognition, Differential Privacy, and Trade-Offs in Apple’s Latest OS Releases | Electronic Frontier Foundation



  • Analog: The Last Defense Against DRM

    With the recent iPhone 7 announcement, Apple confirmed what had already been widely speculated: that the new smartphone won’t have a traditional, analog headphone jack. Instead, the only ways to connect the phone to an external headset or speaker will be via Bluetooth, through the phone’s AirPlay feature, or through Apple’s proprietary Lightning port.

    Apple’s motivations for abandoning the analog jack are opaque, but likely benign. Apple is obsessed with simple, clean design, and this move lets the company remove one more piece of clutter from the phone’s body. The decision may also have been a part of the move to a water-resistant iPhone. And certainly, many people choose a wireless listening experience.

    But removing the port will change how a substantial portion of iPhone owners listen to audio content—namely, by simply plugging in a set of headphones. By switching from an analog signal to a digital one, Apple has potentially given itself more control than ever over what people can do with music or other audio content on an iPhone. We hope that Apple isn’t unwittingly opening the door to new pressures to take advantage of that power.

    When you plug an audio cable into a smartphone, it just works. It doesn’t matter whether the headphones were made by the same manufacturer as the phone. It doesn’t even matter what you’re trying to do with the audio signal—it works whether the cable is going into a speaker, a mixing board, or a recording device.

    The Lightning port works differently. Manufacturers must apply and pay a licensing fee to create a Lightning-compatible device. When rumors were circulating about an iPhone 7 with no headphone jack, our colleague Cory Doctorow predicted that big content companies would try to take advantage of that control: “Right now, an insistence on DRM would simply invite the people who wanted to bypass it for legal reasons to use that 3.5mm headphone jack to get at it. Once that jack is gone, there’s no legal way to get around the DRM.”

    In other words, if it’s impossible to connect a speaker or other audio device to an iPhone without Apple software governing it, then major media companies might pressure Apple to place limits on how Apple’s customers can use their content. Because U.S. law protects digital rights management (DRM) technologies, it may be illegal to circumvent any potential restrictions, even if you’re doing it for completely lawful purposes. There would certainly be a precedent: big content companies infamously pressured Apple to incorporate DRM in its iTunes service.

    iTunes DRM is a thing of the past now—and fortunately, most DRM for audio downloads has gone with it. But some major media companies are still eager to find ways to control how we use their content. In the current debate over the FCC’s proposal to unlock TV set-top boxes, TV and film producers have insisted that they should be able to decide which devices can receive video. Can we believe the content industry will leave audio alone if outputs become entirely digital?

    The good news is that the new iPhone will come with a Lightning dongle that will provide a standard 3.5 mm analog port. What’s not clear is whether iOS or specific apps will be able to disable the dongle—if so, history suggests that Hollywood and other major media industries will be eager to take advantage of that capability. It’s also unclear whether the iPhone’s software will be able to disable access to the 3.5 mm port for other third-party devices that use it, such as credit card terminals or blood pressure readers.

    To its credit, Apple has been adamant it won’t use the new design to restrict your listening experience. But therein lies the problem: you shouldn’t have to depend on a manufacturer’s permission to use its hardware however you like (or, for that matter, to build your own peripherals and accessories for it). What you can do with your hardware should be determined by the limits of the technology itself, not its manufacturers’ policy decisions.

    Ultimately, this story isn’t about Apple or any other company’s design decisions. It’s about the Digital Millennium Copyright Act’s protection for DRM. Section 1201 of the DMCA makes it illegal to bypass DRM or give others the means of doing so. Section 1201 gives technology manufacturers the power to cast clouds of legal uncertainty over common uses of their products. It gives content owners and other powerful entities an unfair weapon against innovation by others. It’s a law that needs fixing.

    Source: Analog: The Last Defense Against DRM | Electronic Frontier Foundation