Math Pierces Steve Jobs’ Reality Distortion Field After 35 Years

In 2011, a hero to so many of us including myself was in the last stages of pancreatic cancer.  Steve Jobs, an icon of the PC revolution, and later what should rightfully be called “the mobile revolution”, was working hand in hand with author Walter Isaacson. The pair were completing the last biography he would take part in while alive.

The book, simply called Steve Jobs, was published 19 days after Jobs’ death and quickly rose to bestseller status. The book contains many memorable tales about the way he motivated others. In one often referenced experience, we read about Jobs driving home the point that making computers faster can save entire human lifetimes.

Now, 35 years after Jobs related it in August 1983, we can plainly see that the math behind his dramatic example was wrong … but maybe that’s okay.

Jobs was not an engineer. He was also not the original inventor of many ideas which made Apple’s products a game changer. Like many successful visionaries, his value to the market — and to the world — was his ability to help others connect the dots.

In leading Apple, he created narratives that captured imaginations and provided motivation. His famous ‘reality distortion field’ worked because he showed his employees things they wanted to believe. He packaged goals as a better reality than the one his audience existed in moments before.

People forgot about preconceived limits. As perceptions and desires outstripped current reality, his engineers and product designers were doggedly motivated to close the gap —  by improving actual reality. 

In the final biography, Isaacson relates the particular moment which inspired many including the team at my company Pocketmath. Jobs had always championed the quality of the user’s experience, and one day he set his sights on the onerously slow boot time common to computers of the era.

According to Isaacson, the Apple CEO explained to engineer Larry Kenyon what dramatic impact 10 seconds of boot time savings would have:

“Jobs went to a whiteboard and showed that if there were five million people using the Mac, and it took ten seconds extra to turn it on every day, that added up to three hundred million or so hours per year … [sic] … equivalent of at least one hundred lifetimes saved per year.”

The story sounds great, and we have no reason to doubt Jobs really delivered such a message. Furthermore, it likely did have a profound effect. Isaacson later writes that Kenyon produced boot times a full 28 seconds faster.

However, the math doesn’t add up. Based on the assumptions outlined, it saves far less than 100 lifetimes.

In the 2005 book Revolution in the Valley: The Insanely Great Story of How the Mac Was Made, author Andy Hertzfeld quotes Steve Jobs somewhat differently:

“Well, let’s say you can shave 10 seconds off the boot time. Multiply that by 5 million users and that’s 50 million seconds every single day. Over a year, that’s probably dozens of lifetimes. Just think about it. If you could make it boot 10 seconds faster, you’ll save a dozen lives. That’s really worth it, don’t you think?”

Let’s walk through the math.

Assume, just as Jobs did, that there would be 5 million people using the Mac. Likewise, assume the time savings is 10 seconds each day per person. Each day, 50 million seconds are saved. In a year, that translates to a savings of 50 million multiplied by 365 days which is 18.25 billion seconds.

Let’s translate that savings into hours. There are 60 seconds in a minute and 60 minutes in an hour.  So, there are 60 times 60 which is 3,600 seconds in an hour. To obtain the number of hours saved, divide 18.25 billion seconds by 3,600 seconds.  The result is 5,069,444 hours of savings.


Now, let’s compute a typical life span of 75 years in hours.  We calculate 365 days per year times 24 hours per day times 75 years.  The result is 657,000 hours.


So, how many lifetimes have we saved?  We divide the total savings 5,069,444 hours by our assumed typical lifespan of 657,000 hours.  We have saved nearly 8 lifetimes.

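If you’d like to verify the figures yourself, here’s a minimal sketch of the same arithmetic in Python (the variable names are mine, chosen purely for illustration):

    users = 5_000_000
    seconds_saved_per_day = 10
    days_per_year = 365

    seconds_per_year = users * seconds_saved_per_day * days_per_year   # 18,250,000,000
    hours_per_year = seconds_per_year / 3600                           # about 5,069,444
    lifespan_hours = 75 * days_per_year * 24                           # 657,000
    lifetimes_saved = hours_per_year / lifespan_hours

    print(f"{seconds_per_year:,} seconds saved per year")
    print(f"{hours_per_year:,.0f} hours saved per year")
    print(f"{lifetimes_saved:.1f} lifetimes saved per year")           # about 7.7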

Of course, it’s unlikely Jobs was trying to make his point in anything but rough numbers, but his 100 lifetimes is dramatically off, by about an order of magnitude, and even “dozens” overstates the savings several times over.

Did Jobs goof on the math? Was he misquoted?

Maybe it doesn’t matter. Jobs made a point that some relatively small amount of engineering effort would save lifetimes of time, and he was right.

What’s more, the power of the lesson — even if flawed — paid bigger dividends. Apple not long ago became the world’s first trillion dollar company, and the company sold 3.7 million Mac computers in Q3 of this year alone. With many Macs remaining in service for several years, there are quite likely a lot more than 5 million people using a Mac at the very moment you read this.

The story itself has been quoted profusely since appearing in Hertzfeld and Isaacson’s works. Are we going to lambast the story as “fake news”, or are we going to say that everyone makes mistakes but the basic idea was right?  One thing is possibly true: if anybody checked the math, they didn’t talk about it.

For what it’s worth, I’ll hazard a guess as to where the original math went wrong. Suppose we convert 18.25 billion seconds saved into hours by dividing by the 60 seconds in a minute and then by the 60 minutes in an hour. Now, suppose while scribbling on the whiteboard Jobs only divided by 60 one time. We get about 304,166,667 — roughly the 300 million quoted in the book. That would be correct if we were talking about the number of minutes, not hours.
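A quick way to sanity-check that guess, again just a sketch of the arithmetic:

    seconds_per_year = 18_250_000_000

    print(seconds_per_year / 60)     # about 304,166,667 -> minutes; roughly the "300 million" in the book
    print(seconds_per_year / 3600)   # about 5,069,444   -> hours; the correct conversion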

Yet again, we’ve seen the reality distortion field in full effect. In the 13 years since Hertzfeld’s book was published, the story has been read by millions. It has been enthusiastically retold without question in Isaacson’s biography, in well-known periodicals including the Harvard Business Review, and throughout the web and social media.

It’s worth noting that Apple’s operating system has changed drastically since the 1980’s. Apple’s OS X desktop operating system, launched in 2001, replaced core components with ones borrowed from BSD, an operating system with UNIX roots. As such, it’s likely the optimizations built in 1983 at Jobs’ urging have long been outmoded or removed entirely.

However, the operating system has continued to evolve upon its past and take inspiration from those stories. More broadly, Apple as a business would not be where it is today had it not had some degree of early success. Each product in the technology world builds on lessons and customer traction gained from previous versions.

One can argue not all technological progress benefits people, but I find it difficult to say there haven’t been some home runs for the better.

While reality may have been distorted longer than many would have dreamed, reality also caught up. As Apple made its reality catch up, its products have contributed lifetimes to the human experience.  And … maybe, just maybe … there was some “fake it ‘til you make it.”

More Efficient Software = Less Energy Consumption: Green Computing isn’t just Hardware and Virtualization

Originally published 16 November 2009.

Green is a great buzzword, but the real-world driver for many “green” efforts is cost. Data center power is expensive. Years ago, Oracle moved a major data center from California to my town Austin, Texas. A key reason: more predictably priced, cheaper power in Texas vs. California. What if Oracle could make the data center half the size and take half the power because its software ran more efficiently?

Your bank, your brokerage, Google, Yahoo, Facebook, Amazon, countless e-commerce sites and more often require surprisingly many servers.  Servers have traditionally been power-hungry things favoring reliability and redundancy over cost and power utilization.  As we do more on the web, servers do more behind the scenes.  The amount of computing power or various subsystem capabilities required varies drastically based on how an application works.

These days, hardware vendors across the IT gamut try to claim their data center and server solutions are more power efficient. The big push for consolidation and server virtualization (the practice by which one physical server functions as several virtual servers which share the hardware of the physical machine) does make some real sense.  In addition to using less power, such approaches often simplify deployment, integration, management and administration. It’s usually easier to manage fewer boxes than more, and the interchangeability facilitated by things like virtualization combined with good planning make solutions more flexible and able to more effectively scale on demand.

Ironically, the issue people seem to pay the least attention to is perhaps the most crucial: the efficiency of software. Software orchestrates everything computers do. The more computer processors, memory, hard drives and networks do, the more power they need and the bigger or more plentiful they must be. The more operations servers must perform, the more servers, or the more power-hungry servers, one needs. The software is in charge. When it comes to the operations a computer performs, the software is both the CEO and the mid-level tactical manager that can make all the difference in the world. If software can be architected, coded or compiled to manage its operations more efficiently, the number of operations per unit of work produced goes down. Every operation saved means power saved.

Computers typically perform a lot of overly redundant or otherwise unneeded operations. For example, a lot of data is passed across the network not because it absolutely needs to be, but because it’s easier for a developer to build an app that operates that way, or easier for the application to be deployed that way in production. There are applications that use central databases for caches when a local in-memory cache would not only be orders of magnitude faster but also burn less power. Each time data crosses a network it must be processed on each end, and often formatted and reformatted multiple times.
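To make the in-memory cache point concrete, here is a minimal sketch; the function names and the simulated latency are invented purely for illustration:

    import functools
    import time

    # Hypothetical stand-in for a remote lookup: in a real system this would be
    # a network round trip to a central database or web service.
    def fetch_user_profile_from_database(user_id):
        time.sleep(0.05)  # simulate network and query latency
        return {"id": user_id, "name": f"user-{user_id}"}

    # A local in-memory cache: repeated lookups for the same user are served
    # from process memory instead of crossing the network again.
    @functools.lru_cache(maxsize=10_000)
    def get_user_profile(user_id):
        return fetch_user_profile_from_database(user_id)

    get_user_profile(42)   # first call pays the remote cost
    get_user_profile(42)   # later calls are answered locally, saving work and power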

A typical web service call (REST, SOAP, etc.) – the so-called holy grail of interoperability, modularity and inter-system communication in some communities – is a wonderful enabler, but it does involve parsing (e.g. turning text data into things the computer understands), marshalling (a process by which data is transformed, typically to facilitate transport or storage) and often many layers of function calls, security checks and other things. The use of web services is not inherently evil, but far more carbon gets burned making a web service call to a server across the country, or even inches away, than having the computer talk to its own memory. It’s also a lot slower.
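As a rough illustration of the marshalling cost alone (a sketch with made-up data; no network involved, and a real web service call would add far more on top of this):

    import json
    import timeit

    record = {"user": 12345, "items": list(range(100)), "total": 99.95}

    # Reading a value already in memory versus round-tripping the record through
    # text serialization, as a web service call would require.
    in_memory = timeit.timeit(lambda: record["total"], number=100_000)
    marshalled = timeit.timeit(lambda: json.loads(json.dumps(record))["total"],
                               number=100_000)

    print(f"in-memory access:       {in_memory:.3f} s")
    print(f"marshal + parse + read: {marshalled:.3f} s")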

Don’t get me wrong, I’m a big believer in the “army of ants” approach. However, I see the next big things in power utilization being software driven. We’re going to reach a point where we’ve consolidated all we reasonably can, and at that point it’s going to be a focus on making the software more efficient.

If my code runs in a Hadoop-like cluster (Hadoop is open source software that facilitates computing across many computers) and the framework has tremendous overhead compared to what I’m processing, how much smaller could I make the cluster if I could remove that overhead? What if I process more things at once in the same place? What if I batch them more? What if I can reduce remote calls? What if I explore new languages like Go with multi-core paradigms? What if widely deployed operating systems like Linux, Windows and MacOS became more power efficient? What if widely used apps consumed less power-hungry memory? What if security software took fewer overhead CPU cycles? Can we use multi-core processing more efficiently?

In most cases, performance boosts and power savings go hand-in-hand.  Oriented toward developers, here are a few more obvious areas for improvement.  Most are pre-existing good software design practices:

– Caching is the first obvious place:  (1) more caching of information, (2) less reprocessing of information, (3) more granular caching to facilitate caching where it was not previously done.

– Data locality:  Do processing as close to where data resides as possible to reduce transportation costs.  Distance is often best measured not in physical distance but in the number of subsystems (both hardware and software) that data must flow through.

– Limit redundant requests:  Once you have something retrieved or cached locally, use it intelligently:  (1) collect changes locally and commit them to a central remote location such as a database only as often as you need to (see the sketch after this list), (2) use algorithms that can account for changes without synchronizing as often with data on other servers.

– Maximize use of what you have:  A system is burning power if it’s just on.  Use the system fully without being wasteful:  (1) careful use of non-blocking (things that move on instead of having the computer wait for a response from a component) operations in ways that let the computer do other things while it’s waiting;  (2) optimize the running and synchronization of multiple processes to balance use, process duration and inter-process communication such that the most work gets done with least waiting or overhead.

– Choose the language, platform and level of optimization based on amount of overall resources consumed:  Use higher performance languages or components and more optimizations for sections which account for the most resource utilization (execution time, memory use, etc.).  Conversely, use easier to build or cheaper components that account for less overall resource use so that more focus can go to critical sections.  (I do this in practice by mixing Ruby, Java and other languages inside the JRuby platform.)
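Here is the batching sketch mentioned in the “limit redundant requests” item above. The class, its thresholds and the flush callback are invented for illustration rather than taken from any real library:

    import time

    class BatchedWriter:
        """Collects changes locally and flushes them to a remote store
        (database, API, etc.) only when the batch is full or old enough,
        instead of making one network call per change."""

        def __init__(self, flush_fn, max_items=500, max_age_seconds=5.0):
            self.flush_fn = flush_fn            # e.g. a bulk INSERT or bulk API call
            self.max_items = max_items
            self.max_age_seconds = max_age_seconds
            self.pending = []
            self.last_flush = time.monotonic()

        def add(self, change):
            self.pending.append(change)
            too_many = len(self.pending) >= self.max_items
            too_old = time.monotonic() - self.last_flush >= self.max_age_seconds
            if too_many or too_old:
                self.flush()

        def flush(self):
            if self.pending:
                self.flush_fn(self.pending)     # one round trip for many changes
                self.pending = []
            self.last_flush = time.monotonic()

    # Usage sketch: writer = BatchedWriter(flush_fn=send_bulk_update); writer.add(change)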

In certain applications, maybe we don’t care about power utilization or much at all about efficiency, but as applications become increasingly large and execute across more servers, development costs may in some scenarios become secondary to computing resources. Some goals are simply not attainable unless an application makes efficient use of resources, and that focus on efficiency may pay unexpected dividends.

Developers, especially those building large-scale or widely deployed applications: if we want to be greener, let’s focus on run-times, compilers and the new and yet-to-be-developed paradigms for distributed, massively multi-core computing.

There is a story that Steve Jobs once motivated Apple engineers to make a computer boot faster by explaining how many lifetimes of waiting such a boost might save.  Could the global impact of software design be more than we imagine?

[ Edit 14 October 2018: See also: Math Pierces Steve Jobs’ Reality Distortion Field after 35 Years. ]

Associated Press going too far?

Recently, the widely used wire service started charging for excerpts using more than 4 words. Under fair use rules and established copyright law, authors of editorials and reviews are ordinarily permitted to excerpt from a work without permission from the copyright holder as long as the source is properly identified.

In a NY Times article dated July 23, 2009:

“In an interview, [AP’s chief executive Tom Curley] specifically cited references that include a headline and a link to an article, a standard practice of search engines like Google, Bing and Yahoo, news aggregators and blogs.” —Richard Perez-Pena, NY Times

AP has been on a road to protect its content and revenue stream for a while, taking what some see as overreaching measures. Last year, the Drudge Report’s liberal opposite, the Drudge Retort, was at the center of a publicized dispute (about the case) over content posted by its contributors. AP’s legal eagles have reportedly been going after a number of sites and bloggers, and a form of automated DRM (AP’s press release on the subject) is also being rolled out.

In the last couple of days, many are pointing to James Grimmelmann’s experiment, in which he was charged to license words clearly in the public domain (Grimmelmann’s blog post). Mr. Grimmelmann was refunded his money and the license was rescinded, which had no bearing on his use of the words given that they – a quote by Thomas Jefferson – are already in the public domain. AP likened the experiment to running an item you’ve already paid for through a self-serve grocery store checkout station.

The iCopyright system Grimmelmann used to license the content runs on the honor system, and according to others, licensees can copy and paste into it whatever text they intend to use. The system is relatively simple and, at least in this case, apparently did not check the public domain status of the content. Given that we are dealing with a system that already gives the user some discretion about what’s entered, is the Grimmelmann story a minor procedural flaw and not really a big deal?

As AP points out, reporters should be paid for their work. Why should bloggers – especially the few who turn a profit – freeload on a hardworking industry that requires real people and real infrastructure on the ground, asking questions and going places? Furthermore, in today’s era of sound bites, sometimes just a few words tell the story.

Traditional media’s revenue sources and business models are on shaky ground with print – a big part of AP’s customer base – in the most precarious position. When content can be syndicated, rehashed or even originated by so many people, will centralized sources large enough to generate real advertising and subscriber revenue survive? Can AP, Reuters, CNN or any other large privately funded news organization survive in the coming climate?

Some say what’s most alarming about the situation is the notion that fair use is per policy being thrown out altogether which could set a dangerous precedent.

Innovation Process: Limitations of Schemas

Once upon a time, I took a college class on interpersonal communications. We discussed schemas upon which the brain operates. Interestingly, in marketing – the subject designed among other things to manipulate or aid in the manipulation of the human psyche for increased profit – we discussed schemas upon which the brain operates.

Then, in a class on neural networks we discussed why brains both organic and artificial tend to remember the first and last things they learned about a specific topic. Furthermore, we talked about how schemas within these brains operate.

Speaking to a technical crowd: SQL operates upon very rigidly defined schemas. Ordinarily, we have tables with columns defining things like people’s names and addresses and telephone numbers and dates of birth and gender and what have you.

Schemas are wonderfully robotic – if by robotic you mean those old conceptions of robots from 1950’s sci-fi. Simplistic notions of schemas tend to dictate that we approach the world very deterministically, very discretely (and I don’t mean privately) and logically. I say, wrong!

Schemas mean patterns. We and most organisms with neurons learn by association. We start with some hard wired axioms and go from there. Break the pattern, and things become difficult to understand. While most “out of the box” thinking is I might argue pretty boxed in, the theoretical ideal of “out of the box” operation is to go beyond the schemas. Is this possible? I don’t know. But maybe we can combine schemas.

Most attempts at productivity are based on refining operations into consistent, easy to follow schemas. In software design, we use design patterns to enforce models that we can wrap our brains around – or at least – having spent much time banging our heads against walls now have a particular schema thoroughly beaten in … and might as well recycle.

Consistent, reusable schemas are absolutely wonderful for Model T’s, Model F’s and many things that churn down an assembly line. Plenty of simple database-driven software can be built perfectly well with a lot of recycled thought.

Now, there is an antiquated saying in research with words to the effect: Before wasting your time going down a road much travelled to re-invent the wheel, the donut, what have you … see if somebody else has done it first and better. If you’ve got something on a shelf, pull it off and use it. Great. This works 99% of the time when you’re not producing new schemas. There’s a ton of value in evolutionary steps and applying something from one schema into another.

However, once in a while we want to do something revolutionary. We don’t start from zero. We are surrounded by many good schemas; old solutions to old problems should often prevail. Then there comes a time when we must come up with a schema we believe to be genuinely new. New? Is there such a thing as a new schema? I have no idea. I would venture to say there likely is not; all schemas are combinations of others in some way; everything is based upon association of one form or another. I don’t care. I’ll leave this subtle point for the philosophers.

To me, I care about not being constrained by old schemas. The less I know sometimes the better. The less structure I have sometimes the better. I want to look at my problem, flail about, come up with a half-baked solution and then plug the holes with somebody’s tried and true schema.

If I’m operating under tremendous structure, I can’t do this. The wonder of iterative design is in some sense a means to apply my very semi-structured process. Iterative improvement allows one to drift about for a solution, come up with something new and then not waste too much time dawdling on unnecessary details.

That’s my 3.5 + rand( rand(34) )^rand(2/rand(5)) cents. Ironically, this article itself is bound by structure. Go figure.

Metered Broadband? It’s Not Particularly New or Totally Evil – A Brief Introduction to Commercial Bandwidth Services Pricing

Many consumers are up in arms over announcements by several providers that they will begin charging overage rates or limiting data transferred. In fact, much of the hosting industry and higher-end commercial solutions provide Internet connectivity on the basis of (1) the physical line and (2) the amount of bandwidth actually used.

For example, a provider might charge $20 a month for a network connection with a capacity of 100 megabits or 1,000 megabits per second. The provider might then charge a separate fee depending on how much of that connection is used. Bandwidth is often metered on a megabits-per-second basis (there are 8 megabits in a megabyte) or on the total amount of data transferred, often measured in gigabytes.

When bandwidth is sold on a megabit per second (often abbreviated “mbps”) utilization rate, it is often metered by reading actual bandwidth flowing through the connection every so many minutes. In industry standard 95th percentile billing, the highest 5% of those readings are thrown out. The customer is then billed based on the “sustained 95th percentile”.

Under 95th percentile billing, assuming a monthly billing cycle, the customer could in principle use much more bandwidth than usual for up to about 36 hours (5% of a 720-hour month) and would not be billed for the increased amount. So, the 5% in 95th percentile lets customers retain some flexibility for less frequent “bursts”.
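As a rough sketch of how a 95th percentile figure might be computed from those periodic readings (the sample interval and traffic numbers are invented, and real providers vary in the exact convention they use):

    # Assume one reading every 5 minutes over a 30-day month: 8,640 samples,
    # each in megabits per second (Mbps).

    def billable_mbps(readings_mbps):
        """Throw out the highest 5% of readings and bill at the next highest."""
        ordered = sorted(readings_mbps)
        index = int(len(ordered) * 0.95) - 1
        return ordered[index]

    # Example: a steady ~40 Mbps with a full day's burst to 400 Mbps.
    samples = [40.0] * 8_352 + [400.0] * 288    # 288 five-minute samples = 24 hours
    print(billable_mbps(samples))               # 40.0 -- the burst falls inside the discarded 5%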

Per “bucket” or “data transferred” billing is just so much money per gigabyte (or other amount).

Customers typically pay for:
(1) the line –
Physical line or uplink to provider.
(2) commit –
The amount of bandwidth for which the customer agrees typically over some contract term to purchase. This bandwidth is sold at a “commit rate” which is often less expensive than the overage rate.
(3) overage –
The amount of bandwidth used beyond the commit. Overage bandwidth is sold at an “overage rate”, which is often roughly double the commit rate.
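Putting the three pieces together, a hypothetical monthly bill might look like the following sketch (all prices and rates are invented for illustration):

    def monthly_bill(used_mbps, line_fee=20.0, commit_mbps=100,
                     commit_rate=2.0, overage_rate=4.0):
        """Line fee, plus the committed bandwidth at the commit rate,
        plus anything over the commit at the higher overage rate."""
        overage_mbps = max(0, used_mbps - commit_mbps)
        return line_fee + commit_mbps * commit_rate + overage_mbps * overage_rate

    print(monthly_bill(used_mbps=80))     # under the commit:  20 + 200 + 0   = 220
    print(monthly_bill(used_mbps=130))    # 30 Mbps over:      20 + 200 + 120 = 340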

Higher overage rates relative to commit rates encourage customers to take on larger commits, ensuring ISPs can better plan their infrastructure. Higher overage rates also account for the often higher cost and over-provisioning necessary to provide services when demand is less predictable.

A service provider that has unpredictable bandwidth utilization must choose among (1) over provisioning infrastructure and charging more money for services or (2) providing a lower quality of service particularly at peak times and likely cutting corners elsewhere.

A service provider that has many customers all paying the same rate but using very different amounts of bandwidth must (1) charge all customers higher rates or (2) deliver a lower overall quality of service to all customers.

Your power is metered. Your cell phone is metered. You pay for the gasoline you burn in a car. You choose whether to buy expensive or inexpensive products. You choose the nature and quality of what you consume often based on what you’re willing to pay.

In spite of some of the uproar, I believe that charging for or even capping bandwidth based on usage is in fact fair. Implemented properly, such efforts could result in a higher quality of service for all consumers.

The key issue should be whether prices charged for overage and larger commits are fair.

Unapologetically Embracing the Term: Artificial Intelligence

In a college course on neural networks, a professor once described to the class how the reputation of artificial intelligence had taken a nosedive in the 1980’s. A divided community and its pundits had built up a perception that C-3PO-like robots and talking, thinking computers were not far off. AI’s visionaries overpromised and underdelivered.

To this very day, entrepreneurs hesitate to utter the words “artificial intelligence” for fear of losing credibility. Various systems are often called by more specific names whether it be “Bayesian classifier”, “prediction system”, “search engine”, “knowledge base”, etc. These terms all have various meanings known well to the AI community, but we dare not lump them together and utter the words “artificial intelligence.”

There are plenty who would say I am bastardizing terminology. Artificial intelligence’s very definition is gray. Is a car engine that employs a neural network to manage a fuel air mixture actually intelligent? Is Google intelligent? At what point is information retrieval AI? Is a spell checker AI? As many others have said before me, I take the viewpoint that AI (or oftentimes things that apply AI) is a continuum without clearly defined boundaries.

Rather than trying to carefully classify certain algorithms, I devise solutions that make use of various methods that might be borrowed from an AI textbook, might arise from mathematics or that simply come from my own ideas. If the approach is particularly probabilistic without adhering to well defined mathematics or relies on certain kinds of innovations employing non-deterministic or difficult to predict behavior, I tend to call it AI.

During the course of applying or developing AI, I rarely use words such as “artificial” or “intelligent”. After all, to me I’m just building a program in a way that makes sense to me.

The most difficult problems to solve in practical applications tend to be those with many possible answers or no exact answer. We run into cases where we cannot build a computer program to solve the problem with a reasonable amount of time and computing resources. Other times, even given infinite time and resources, the problem is still unsolvable. In computer science, these are often problems said to have “non-polynomial” solutions. For such problems, we either cannot solve them at all or must devise a solution that provides an approximate answer.

Approximate answers to hard problems very often involve smart solutions — artificially intelligent solutions. Much of AI is about reducing a problem to what matters most and then pumping out a best guess … just like real human beings semi-solving real problems.
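As a toy example of “pumping out a best guess” (not anything specific to my own work, just a textbook greedy heuristic for the famously hard traveling salesman problem):

    import math

    def nearest_neighbor_tour(points):
        """Greedy approximation for the traveling salesman problem:
        always visit the closest unvisited point next.
        Fast and usually decent, but never guaranteed optimal."""
        unvisited = list(points[1:])
        tour = [points[0]]
        while unvisited:
            last = tour[-1]
            nearest = min(unvisited, key=lambda p: math.dist(last, p))
            unvisited.remove(nearest)
            tour.append(nearest)
        return tour

    print(nearest_neighbor_tour([(0, 0), (5, 1), (1, 1), (2, 3)]))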

As we approach more human or more intelligent approaches, I’m unafraid to call these solutions “artificial intelligence”.

With vast amounts of computing power and more creative approaches to problems, I believe our constraints to building pretty good solutions are more and more just the limitations of our own minds. And even there, plenty of AI algorithms do things their own creators (including myself) don’t fully comprehend.

I don’t think about “am I solving a problem logically and intelligently” so much as I try to approach all problems logically and intelligently. But if you ask whether I’m building AI … most of the time in these situations my answer will be “Yes, in which shade of gray?”