$5B for Wall is Chump Change Compared to Torpedoes in Productivity

Think what you will about a wall, a fence, or whatever you might call it. I’m an American citizen, and I’ve just finished my dinner of wonderful Mexican tortas ahogadas while south of that line on a map.

The shutdown’s fallout goes far beyond a wall or the $5 billion to pay for it.

Spending by the US federal government hovers around $4 trillion each year.  (Check out projections by the Congressional Budget Office here.)

With federal spending estimated at $4.7 trillion for 2019, $5 billion is chump change at about 0.1%.

During a shutdown, federal employees like the somewhat disgruntled yet dutiful TSA workers I met on my way to Mexico earlier today suffer a demotivating lack of pay. Surely some of these workers will find, or are already finding, non-government employment.

In the competitive labor market we live in, the best and most capable government employees are going to look elsewhere. Meanwhile, I doubt ‘essential’ workers are operating at any semblance of top productivity given uncertainty and growing resentment. Not to mention … don’t we have a constitutional amendment outlawing this? Oh it’s a confusing world.

So, doing some back-of-the-envelope math, the government has stewarded roughly $425 billion in outlays during the 33 days of the shutdown while firing on less than all cylinders.
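
For the curious, here’s that back-of-the-envelope math spelled out as a rough sketch in Ruby, using the rounded figures above:

annual_outlays = 4_700_000_000_000.0          # ~$4.7 trillion estimated for 2019
wall_request   = 5_000_000_000.0              # the $5 billion in dispute
shutdown_days  = 33

wall_share      = (wall_request / annual_outlays * 100).round(2)          # => 0.11, about 0.1% of annual spending
shutdown_spend  = (annual_outlays / 365 * shutdown_days / 1e9).round      # => 425, roughly $425 billion over the 33 days
puts wall_share
puts shutdown_spend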

Meanwhile, the government is almost surely going to issue backpay at some point. So, everybody’s going to get paid pretty much the same amount for working at diminished productivity or not at all.

Similarly, employee retention will likely take a hit. If so, we’ll see money diverted away from accomplishing the business of government back into hiring, training and the necessary acclimation time it takes new workers to learn and become adept at their roles. Retention issues mean there will be gaps in which some tasks won’t get done efficiently, correctly or on time. Work will be duplicated or redone as well.

Taxpayers, is this really how you want the spending of your money managed?

Now, we see all kinds of posturing on both sides. Trump is afraid of backlash if he breaks a campaign promise — not that I think the good folks here in Mexico are in any hurry to pay for his wall.

Meanwhile, opposition leadership seems all too content to let the games go on as we lose untold billions in lost productivity and operational inefficiencies. When better border security becomes cheaper than the shutdown’s productivity losses, the debate’s premises do change. Is better border security wrong when it’s cheaper, or is there too much moral hazard in such a rationalization?

Wouldn’t it make sense for Democrats to just give Trump his wall and stop wasting money? Wouldn’t it make sense for Trump to say, “let’s table this a while”, so that the business of government can go on with less interruption?

To add to the fog, we see Trump pushing for ‘emergency action’ that might further weaken separation of powers in America.

Both sides find themselves too committed. Too much political capital is at stake. Both sides look a bit like kindergarteners, and the American people are losing.

Math Pierces Steve Jobs’ Reality Distortion Field After 35 Years

In 2011, a hero to so many of us, myself included, was in the last stages of pancreatic cancer.  Steve Jobs, an icon of the PC revolution and later of what should rightfully be called “the mobile revolution”, was working hand in hand with author Walter Isaacson. The pair were completing the last biography he would take part in while alive.

The book, simply called Steve Jobs, was published 19 days after Jobs’ death and quickly rose to bestseller status. The book contains many memorable tales about the way he motivated others. In one often referenced experience, we read about Jobs driving home the point that making computers faster can save entire human lifetimes.

Now, 35 years after Jobs made this argument in August 1983, we can plainly see that the math behind his dramatic example was wrong … but maybe that’s okay.

Jobs was not an engineer. He was also not the original inventor of many ideas which made Apple’s products a game changer. Like many successful visionaries, his value to the market — and to the world — was his ability to help others connect the dots.

In leading Apple, he created narratives that captured imaginations and provided motivation. His famous ‘reality distortion field’ worked because he showed his employees things they wanted to believe. He packaged goals as a better reality than the one his audience existed in moments before.

People forgot about preconceived limits. As perceptions and desires outstripped current reality, his engineers and product designers were doggedly motivated to close the gap —  by improving actual reality. 

In the final biography, Isaacson relates the particular moment which inspired many including the team at my company Pocketmath. Jobs had always championed the quality of the user’s experience, and one day he set his sights on the onerously slow boot time common to computers of the era.

According to Isaacson, the Apple CEO explained to engineer Larry Kenyon what dramatic impact 10 seconds of boot time savings would have:

“Jobs went to a whiteboard and showed that if there were five million people using the Mac, and it took ten seconds extra to turn it on every day, that added up to three hundred million or so hours per year … [sic] … equivalent of at least one hundred lifetimes saved per year.”

The story sounds great, and we have no reason to doubt Jobs really delivered such a message. Furthermore, it likely did have a profound effect. Isaacson later writes that Kenyon produced boot times a full 28 seconds faster.

However, the math doesn’t add up. Based on the assumptions outlined, it saves far less than 100 lifetimes.

In the 2005 book Revolution in the Valley: The Insanely Great Story of How the Mac Was Made, author Andy Hertzfeld quotes Steve Jobs somewhat differently:

“Well, let’s say you can shave 10 seconds off the boot time. Multiply that by 5 million users and that’s 50 million seconds every single day. Over a year, that’s probably dozens of lifetimes. Just think about it. If you could make it boot 10 seconds faster, you’ll save a dozen lives. That’s really worth it, don’t you think?”

Let’s walk through the math.

Assume, just as Jobs did, that there would be 5 million people using the Mac. Likewise, assume the time savings is 10 seconds each day per person. Each day, 50 million seconds are saved. In a year, that translates to a savings of 50 million multiplied by 365 days which is 18.25 billion seconds.

Let’s translate that savings into hours. There are 60 seconds in a minute and 60 minutes in an hour.  So, there are 60 times 60 which is 3,600 seconds in an hour. To obtain the number of hours saved, divide 18.25 billion seconds by 3,600 seconds.  The result is 5,069,444 hours of savings.

[Calculation: 18,250,000,000 ÷ 3,600 ≈ 5,069,444 hours]

Now, let’s compute a typical life span of 75 years in hours.  We calculate 365 days per year times 24 hours per day times 75 years.  The result is 657,000 hours.

[Calculation: 365 × 24 × 75 = 657,000 hours]

So, how many lifetimes have we saved?  We divide the total savings 5,069,444 hours by our assumed typical lifespan of 657,000 hours.  We have saved nearly 8 lifetimes.

[Calculation: 5,069,444 ÷ 657,000 ≈ 7.7 lifetimes]
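
If you’d rather let the computer do the arithmetic, here are the same steps as a few lines of Ruby, using exactly the assumptions above:

seconds_saved_per_year = 10 * 5_000_000 * 365               # 18,250,000,000 seconds
hours_saved_per_year   = seconds_saved_per_year / 3_600.0   # ~5,069,444 hours
lifetime_hours         = 365 * 24 * 75                      # 657,000 hours in a 75-year lifespan
lifetimes_saved        = hours_saved_per_year / lifetime_hours
puts lifetimes_saved.round(1)                               # => 7.7, nearly 8 lifetimes rather than 100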

Of course, it’s unlikely Jobs was trying to make his point in anything but rough numbers, but his one hundred lifetimes is off by about an order of magnitude, and even “dozens” overstates the savings severalfold.

Did Jobs goof on the math? Was he misquoted?

Maybe it doesn’t matter. Jobs made a point that some relatively small amount of engineering effort would save lifetimes of time, and he was right.

What’s more, the power of the lesson — even if flawed — paid bigger dividends. Apple not long ago became the world’s first trillion dollar company, and the company sold 3.7 million Mac computers in Q3 of this year alone. With many Macs remaining in service for several years, there are quite likely a lot more than 5 million people using a Mac at the very moment you read this.

The story itself has been quoted profusely since appearing in Hertzfeld and Isaacson’s works. Are we going to lambast the story as “fake news”, or are we going to say that everyone makes mistakes but the basic idea was right?  One thing is possibly true: if anybody checked the math, they didn’t talk about it.

For what it’s worth, I’ll hazard a guess where the original math went wrong. Converting 18.25 billion seconds saved into hours means dividing by 60 (seconds per minute) and then by 60 again (minutes per hour).  Now, suppose while scribbling on the whiteboard Jobs only divided by 60 one time. We get about 304 million — roughly the three hundred million quoted in the book. That would be correct if we were talking about the number of minutes, not hours.
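
A quick check of that hunch:

seconds_saved_per_year = 18_250_000_000
puts seconds_saved_per_year / 60        # => 304166666, the "three hundred million or so" figure (minutes)
puts seconds_saved_per_year / 60 / 60   # => 5069444, the correct conversion (hours)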

Yet again, we’ve seen the reality distortion field in full effect. In the 13 years since publication, the story has been read by millions.  It has been enthusiastically retold without question in Isaacson’s biography, in well-known periodicals including the Harvard Business Review, and throughout the web and social media.

It’s worthwhile to note that Apple’s operating system has changed drastically since the 1980s. Apple’s OS X desktop operating system, launched in 2001, replaced core components with ones borrowed from BSD, an operating system with UNIX roots. As such, it’s likely the optimizations built in 1983 at Jobs’ urging have long been outmoded or removed entirely.

However, the operating system has continued to evolve upon its past and take inspiration from those stories. More broadly, Apple as a business would not be where it is today had it not had some degree of early success. Each product in the technology world builds on lessons and customer traction gained from previous versions.

One can argue not all technological progress benefits people, but I find it difficult to say there haven’t been some home runs for the better.

While reality may have been distorted longer than many would have dreamed, reality also caught up. As Apple made its reality catch up, its products have contributed lifetimes to the human experience.  And … maybe, just maybe … there was some “fake it ‘til you make it.”

Opinion: Petition w/ 3.9M+ signatures to overturn election is a road that undermines democracy

What We Know: Petition to overturn the election, 3.9 million signatures

This time, I’m posting mostly opinions, but here are the key facts I’ll be commenting on:

On Change.org there is a petition out to overturn the election in what would be Clinton’s favor.  It’s been signed by over 3.9 million people at the time of this posting:

Electoral College: Make Hillary Clinton President on December 19

Opinion: Reasonable sentiment, wrong solution

Many Americans have long argued that the Electoral College is a broken system.  Many have called for reforms or its abolishment.  Reform isn’t a crazy idea.

Hillary Clinton did win the popular vote, and many are saying that in a democracy this should be enough to make her president.  At first blush, this too seems quite rational.

I voted for neither major party candidate.  So, why would I oppose flipping the vote?

I am against so-called “faithless electors” voting contrary to how they were bound in the election.

To me, this is at best a dereliction of duty by electors to represent the people and the states.  Furthermore, I believe we would see serious (hopefully unintended) consequences.

Yes, Hillary Clinton would be president. Then the other half of the country would feel that the rules and laws everyone thought they were playing by had been trampled.

If you think there’s anger today, imagine the level of anger we would see from tens of millions of people who would then feel betrayed.

The very foundations of the Constitution and our government as a whole would be called into question, and respect for law and order would devolve.

We need to keep the peace, and we need to work on ways to improve our institutions of government.

We are already seeing some glimmers of hope that Donald Trump understands compromise and will work toward it.  All presidents break campaign promises.  Meanwhile, let’s not forget that the Republican party, while in power, is far from unified.

I do believe at this juncture that America’s system of checks and balances can and will survive almost any of Trump’s detractors’ worst-case scenarios.

That’s precisely why we need to maintain the integrity of our government.  The Electoral College, for better or worse, is part of that foundation today.

Let’s preserve as much of the integrity of our system as we can to protect all of us.

Is the Electoral College Totally Wrong?

I do support some aspects of the Electoral College that favor concepts like states’ rights.  The more local control we have, the freer people are in many ways.  Your vote “counts more” when more power sits at the state or local level.  How many decisions do you want made by someone across the country?

To maintain this sort of state and local autonomy, each state no matter how small needs to have a certain voice and power.

Our Founding Fathers were not perfect people.  Yes, some did own slaves.  The rights of native peoples were utterly trampled.  They were probably pretty hypocritical at times.  But … they also gave us a system that’s worked pretty well the last couple of centuries.  They did more good than harm for our nation as it is today.

So, let’s give the tried and true ideas the respect they deserve.  I think a read of things like The Federalist Papers (I read them in high school) would remind us how much effort they put into finding the right balances.

Respect the rights of the majority AND the minority.  We should always strive to evolve systems to do that better.

From Austin, Texas, I bid you peace.

Why I’m removing the “Fake Protests” Twitter post

UPDATE:  Yup, I’m pulling it.  Details below.

Dear Twitterverse, Girls and Boys, Republicans, Democrats, Libertarians, Peoples of the Green Party and More,

Yes, I got it wrong.

While there’s no such thing as absolute certainty, I now believe that the busses that I photographed on Wednesday, November 9, were for the Tableau Conference 2016 and had no relation to the ongoing protests against President Elect Trump.

This information was provided to me from multiple professional journalists, and I do still have some faith in humanity.  🙂

[Image: protest-false-2016-11-11]

Right Context, Wrong Facts?

I remain skeptical about just how much manipulation has occurred behind the scenes of many political events, but I do believe that these specific busses were used for a technology conference — nothing more.

I don’t know that Donald Trump was talking about me (he posted 24 hours after my post), but he’s among many with doubts.

So, Why Remove?

I initially believed it was in the best interest of everyone to keep the Tweet live while augmenting the story.  I will indeed post a screenshot for posterity.

The realities of Twitter mean that many people see the Tweet without seeing my follow-ups and corrections.  If I leave it up, people will keep seeing retweets of the original without the corrections alongside, and they will not know that the Tweet is incorrect.

As I have said before, I value the truth.  I will remove the Tweet so more people can have a higher proportion of truth in their lives.  I also want us all to refrain from repeating information that is likely untrue so that we can have greater credibility when our evidence is stronger.  (Less “boy who cried wolf”)

Why Not Remove?

There are some risks in removing this Tweet.

They include:

  • It sends the impression censorship has occurred — something I’m against
  • Some people will believe I was pressured to remove the Tweet or did so purely out of self-interest
  • It may reduce the dialog among all of us

Let’s not be afraid to say things when we aren’t completely sure, but let’s provide the right qualifiers and probabilities when we can.

Rapid discourse won’t always be fact-checked.  If it had to be, much of it simply wouldn’t occur.  That could be as bad as occasionally getting it wrong.  It’s not journalism, and it’s not to be held to the same bar.  (But this only works when everyone understands the “bar” is lower and exercises skepticism — another longer conversation for another time.)

Many Thanks!

I appreciate the conversations I’ve had with many in the “Twitterverse”.

I have tried to be magnanimous to many including those who disagree with me.  I value our ability to discuss with each other in a civil and respectful tone regardless of where our views may stand.  I’ve seen a good amount of that in these past few days, and I’d love to see more.

An Apology and a Promise

I would like to express my sincere apologies to anyone who feels misled.  I can assure you my intentions have always been for the best.  Now that I know just how wide these things can go so quickly, I’ll be more careful to give you the right information should there be a next time.

To the Future!

I am not a professional blogger nor a professional journalist. I do hope to find more ways to make a difference.  Being involved in political discourse is vital to democracy.

All the best,

Eric  🙂

Original Tweet here:

https://twitter.com/erictucker/status/796543689237692416

My “Fake Protest” Claims and America’s Angry Division

On Wednesday a few minutes after 5pm, upon leaving a meeting near downtown Austin, I chanced on a large group of busses parked just east of the I-35 on 5th Street.  I snapped a few pictures and was on my way.

Later that day, I noticed news reports of protests downtown and near the University of Texas campus.  Having dealt with closed streets and unusual traffic patterns toward the south of downtown (below 8th Street) that day, and having seen some pictures of protests that looked more like the south end of downtown than the area near the capitol, I presumed the busses had something to do with the protests.

Casually, I texted a few friends and then made a Twitter post.  I post on Twitter just a few times a year, and until yesterday I had about 40 followers.

https://twitter.com/erictucker/status/796543689237692416

The response was massive (about 15,000 retweets in the first 36 hours), and even the local news commented:

Fox 7 News: Protests across US and Austin accused of being fake by some on social media

And, I’m told, for a fleeting moment it made the front page of Reddit as well:

https://m.reddit.com/r/The_Donald/comments/5cawfz/ok_thats_it_the_antitrump_protesters_in_austin/

Was I flat wrong?  Perhaps!

It turns out Tableau was holding a massive conference, which had nothing to do with politics, less than a mile away.  Could these have been busses for Tableau’s shenanigans?  I hope they don’t mind me linking to the schedule from that same day:

Tableau 2016 Conference Schedule

And so, I posted this:

Does Anyone Care if I was right or wrong?  Sadly, not enough.

In the 3 hours since I posted it, my alternative (and possibly true) view of reality has garnered a whopping 8 retweets and 11 likes.

What’s going on?  The systems that carry information to us all are filtered by what’s sensational — not by what’s true.

To be cynical for a minute, people are surprisingly uninterested in truth but very interested in whatever helps them make their own case.  This is probably human nature, but is it healthy?  Is that really the basis on which we want to make big decisions about our future?

A Few Words from the Middle of the Road

First of all, I voted for Gary Johnson.  I’m a supporter of neither Trump nor Hillary, but I did consider voting for both of them for different reasons.

I’m a secular independent who leans Republican.  I often describe my political views as “little ‘l’ libertarian with a heart.”

A few of my key political positions:

  • Reduce taxes on both individuals and businesses
  • Encourage the repatriation of wealth
  • Pro business
  • Freedom of religion
  • Pro gay marriage
  • Cover pre-existing health conditions for every American
  • Pro gun
  • Increase the availability of visas for foreigners while reducing illegal immigration

As you can see, I don’t fully align with any candidate.  Let’s promote new voices so that we can have the dialog necessary to reach real and lasting solutions.

Parting Words

I want to set an example.  I can be wrong, and I can admit it when I am.  I will strive for the truth, and I ask you to do the same.

Let’s respect and defend the rights that make America … America!  Let’s respect each other, and let’s give each side a chance to be heard.

Whether we like what’s going on in the streets today or not, remember those people have a voice.  I do not want to live in an America where people cannot make their views heard.

I ask everyone to do their best to tone down some of the anger and find compromises if not collaborations that can move us toward a better America.

A NAS in every household will help you and archaeologists. Do it now!

Our lives are digital.  Our cameras are no longer film.  Our notes are no longer postcards.  The USPS is having a hard time staying in business.

To get really deep about this … thousands of years from now, archaeologists will see our world as vividly as on the day your iPhone or DSLR captured it. That is … if the data’s still around.

We’re losing data left and right because we aren’t practicing good ways of storing it.

Stop spreading your digital existence across 12 devices (including the long-retired ones in the attic/garage/dumpster/Goodwill that you never copied data from). Keep a definitive copy of everything in one place.

It’d be a shame if cave paintings outlived our digital pictures, and right now that’s scarily possible.

If we could just centralize and manage it better, then maybe we could also have an easier time archiving it all.

So, let’s get practical!

First off … problems … how data was stored in the dark ages:

  • Cloud services.  They keep things accessible, can help centralize and they’re often inexpensive.  Cloud services miss the boat on your precious pictures and home movies because:
    • Your internet is too slow, and while Google et al are working on this, it’ll be a while yet.
    • Easy-to-use cloud storage providers charge too much.
    • Inexpensive cloud storage providers are usually too hard to use.
  • The hard drive inside your computer can die at any time, and it’s probably not big enough.  Plus, it’s harder (not impossible) to share that stuff with say … your smart TV … and the rest of your family.
  • Portable/external hard drives.  Don’t get me started.  No.  I own far too many, and I have no clue what’s on most of them.  Plus 1/3 of them are broken — in some cases with precious photos or bits of source code lost forever.

Solution:  Get a Network Attached Storage device.  Today.  Without delay.

Why?  If you can centralize everything, it’s easier to back up.  You also have super fast access to it, and everybody in your home can share (or not — they do have access control features).

I have serious love for Synology’s devices for three reasons:

  1. They integrate with Amazon’s Glacier service.  To me, this is a killer feature.  Now I can store every single one of my selfies, vacation pictures, inappropriate home movies, etc. in a very safe place until my credit card stops working.  At $10 per terabyte per month, that credit card should work a while.  Glacier is a good deal.
  2. It’s seriously awesome, fully featured software.
  3. Quality, fast hardware.

All at a price that, while not the cheapest, doesn’t particularly break the bank.

Now, I’ll assume that if you’re anything like me you want speed.  You want fast access to your data, or you’re not going to use that NAS the way it’s meant to be used.

You’re also not going to invest in a 24-drive enterprise SSD NAS because … well … you’re a home user.

So, some guidelines:

  • Buy at least twice as much storage as you think you need.  Your estimate is low.
  • Plan to upgrade/replace in 3 years.  You don’t have to make a perfect buying decision — nor do you have to buy for eternity.  Plan to MIGRATE! — which is why you’ll want hardware that’s fast enough you can copy data off it before the earth crashes into the sun!
  • Don’t plan to add more hard drives anytime soon.  Fill all the drive bays.
  • Buy the largest available drives.
  • Forget SSD.  SSD is too small and far too expensive for the storage you want.  Buy more drives and get performance advantages of having more drives instead.
  • Plan on backing up every computer you own to the NAS — size appropriately — and then some.

My Picks

With price and performance in mind, I’ll wade through the mess of models Synology has to tell you what makes sense in my opinion:

Recommendation 1:  Synology DS414

  • Four drives provide 16TB physical space (with 4TB drives) — 10-12TB usable with Synology’s own RAID; rough math in the sketch after this list.
  • Four drives provide better read performance than two or one
  • Spare fan just in case one fails
  • Link aggregation, but you’ll never use it.
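
Here’s the rough capacity math behind that 10-12TB figure (a sketch assuming the 4TB drives recommended below and Synology’s one-drive-redundancy SHR, which for equal-size drives works out like RAID 5):

drives     = 4
drive_tb   = 4.0                        # marketing terabytes (10^12 bytes) per drive
raw_tb     = drives * drive_tb          # 16 TB of physical space
usable_tb  = (drives - 1) * drive_tb    # 12 TB once one drive's worth goes to redundancy
usable_tib = usable_tb * 1e12 / 2**40   # ~10.9 TiB as most operating systems will report it
puts "raw: #{raw_tb} TB, usable: #{usable_tb} TB (~#{usable_tib.round(1)} TiB)"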

Recommendation 2:  Synology DS214+

  • Fastest Synology two drive model.
  • Two drives with redundancy (mirroring).
  • For some users, the video playback features of the DS214play may be more appropriate, but it’s slower and more expensive.

Recommendation 3:  Synology DS114

  • Danger!  Just one drive — no redundancy.  You are backing up with Glacier, right?
  • Fast for a single drive NAS

All provide:

  • USB 3.0 port(s) to load your data from a portable drive
  • Gigabit ethernet
  • All that lovely Synology software!

Hard drives?

Personally, I’d buy the Western Digital Red 5400RPM NAS drives in 4TB.  Based on Amazon’s pricing, I don’t see much of a premium if any for getting the largest model on the market.  The larger the drives, the more benefit you get from your NAS, so I wouldn’t skimp.

If you really truly believe you won’t need the space, but you’d like the performance of four drives on the DS414, then you can save around 350 USD by purchasing 4x 2TB drives instead of 4x 4TB.

Your Network Needs Speed

Now, along with all that firepower in the NAS, you need the network to feed that speed addiction.

Get a good quality switch, and if you’re going to use your NAS over wireless check out Amped Wireless RTA 15.  Wired speeds will nearly always be faster, but I like wireless convenience just like you.

You’ll Love Speedy Backups

For extra credit, Apple’s Time Machine backup works really nicely with my NAS, and a lot faster when I plug in the ethernet cable.  On a Cisco 2960G switch (yes, I have some serious commercial grade switches lying around), my late model Apple MacBook Pro Retina did around 100 gigs in under 15 minutes.

Do I need a NAS in the future?

Possibly not, once bandwidth gets there and cloud offerings match up at the right price points.

Oh, and a little re-arrangement of the letters NAS … NSA.  User trust!  Yes, all this assumes user trust of cloud services.  Then again, the NSA can probably backdoor your NAS if they really want to.  Sorry.  Nothing’s perfect.

Happy Trails

Your mileage may vary.  My new DS414 was a religious experience.

Why Amazon’s EC2 Outage Should Not Have Mattered

This past week I got a call in the middle of the night from my team that a major web site we operate had gone down. The reason: Amazon’s EC2 service was having issues.

This is the outage that famously interrupted access to web sites ordinarily visited by millions of people, knocked Reddit alternately offline or into an emergency read-only mode for about a day (or more?), and was mentioned in the Wall Street Journal, MSNBC and other major news outlets.

In the Northern Virginia region where the outage occurred and where we were hosted, Amazon divides the EC2 service into four availability zones. We were unlucky enough to have the most recent copies of crucial data in exactly the wrong availability zone, which made an immediate, graceful failover to another zone nearly impossible because the data was not retrievable at the time. Furthermore, we could not immediately shift to another region because our AMIs (Amazon Machine Images) were stuck in the crippled Northern Virginia region and we lacked pre-arranged procedures to migrate services.

Procedures to migrate to another region were in the works but not yet established. Having some faith in Amazon’s engineering team, we decided to stand pat. Our belief was that by the time we took mitigating measures, Amazon’s services would be back to life anyway. And … that proved to be true to the extent that we needed.

The lessons learned are these:
(1) Replicate your data across multiple Amazon regions
(2) Do 1 with your machine images and configuration
(3) For extra safety, do 1 and 2 with another cloud provider as well
(4) It’s probably a good idea to also do an off-cloud backup

Had we already done just (1) and (2), our downtime would have been measured in minutes, not hours, as one of our SAs flipped a few switches … all WHILE STAYING on Amazon systems. Notice how Amazon’s shopping site never seemed to go down? I suspect they do this.
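
To make lessons (1) and (2) concrete, here’s a minimal, purely hypothetical Ruby sketch of that kind of switch flipping. The helper methods are stand-ins for whatever health checks and DNS or load-balancer updates you actually use, and it only helps if data and machine images are already replicated to the standby regions:

# Preferred regions, in order. Standby regions only help if data and AMIs are
# already replicated to them ahead of time (lessons 1 and 2).
REGIONS = ["us-east-1", "us-west-1", "eu-west-1"]

# Hypothetical health check: in practice, hit a status page or your monitoring system.
def region_healthy?(region)
  region != "us-east-1"   # pretend the primary region is the one having a bad day
end

# Hypothetical switch: in practice, update DNS or load-balancer configuration to
# point traffic at the stack already running in the chosen region.
def activate(region)
  puts "Routing traffic to #{region}"
end

active = REGIONS.find { |r| region_healthy?(r) }
active ? activate(active) : abort("No healthy region available")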

As for the coverage stating that Amazon is down for a third day and horribly crippled, I can tell you that we are operating around the present issues, are still on Amazon infrastructure and are not significantly impacted at this time. Had we completed implementation of our contingency plans only within Amazon by the time this happened, things would have barely skipped a beat.

So, take the hype about the “Great Amazon Crash of 2011” with a grain of salt. The real lesson is that in today’s cloud, contingency planning still counts. Amazon resources providing alternatives in California, Ireland, Tokyo and Singapore have hummed along without a hiccup throughout this time.

If Amazon would make it easier to move or replicate things among regions, this would make implementation of our contingency plans easier. If cloud providers in general could make portability among each other a point and click affair, that would be even better.

Other services such as Amazon’s RDS (Relational Database Service) and Beanstalk rely on EC2 as a sub-component. As such, they were impacted as well. The core issue at Amazon appears to have involved the storage component upon which EC2 increasingly relies: EBS (Elastic Block Store). Ultimately, a series of related failures and the overload of remaining online systems caused instability across many components within the same data center.

Moving into the future, I would like to see a world where Amazon moves resources automagically across data centers and replicates in multiple regions seamlessly. Also, I question the nature of the storage systems behind the scenes that power things like EBS, and until I have more information it is difficult to comment on their robustness.

Both users and providers of clouds should take steps to get away from reliance on a single data center. Initially, the burden by necessity falls on the cloud’s customers. Over time, providers should develop ways such that global distribution and redundancy happen more seamlessly.

Going higher level, components must be designed to operate as autonomously as possible. If a system goes down in New York City, and a system in London relies upon that system, then London may go down as well. Therefore, a burden also exists to design software and/or infrastructure that carefully take into account all failure or degradation scenarios.

Ruby Developers: Manage a Multi-Gem Project with RuntimeGemIncluder (Experimental Release)

A couple of years ago in the dark ages of Ruby, one created one Gem at a time, hopefully unit tested it and perhaps integrated it into a project.

Every minute change in a Gem could mean painstaking work, often repeating various build, include and/or install steps over and over.  No more!

I created this simple Gem (a Gem itself!) that at run-time builds and installs all Gems in paths matching patterns defined by you.

I invite brave souls to try out this EXPERIMENTAL release now, pending a more thoroughly tested/mature release. Install RuntimeGemIncluder, define some simple configuration in your environment.rb or a similar place, and use require as you normally would:

Here’s an example I used to include everything in my NetBeans workspace with JRuby.

Download the Gem from http://rubyforge.org/frs/?group_id=9252

To install, go to the directory where you have downloaded the Gem and type:

gem install runtime-gem-includer-0.0.1.gem

(Soon you may be able to install directly from RubyForge by simply typing ‘gem install runtime-gem-includer‘.)

Some place before you load the rest of your project (like environment.rb if you’re using Rails) insert the following code:

trace_flag = "--trace"
$runtime_gem_includer_config =
  {
    :gem_build_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S rake #{trace_flag} gem",
    :gem_install_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S gem install",
    :gem_uninstall_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S gem uninstall",
    :gem_clean_cmd => "\"#{ENV['JRUBY_HOME']}/bin/jruby\" -S rake clean",
    :force_rebuild => false,
    :gem_source_path_patterns => [ "/home/erictucker/NetBeansProjects/*" ],
    :gem_source_path_exclusion_patterns => []
  }
require 'runtime_gem_includer'

If you are using JRuby and would like to just use the defaults, the following code should be sufficient:


$runtime_gem_includer_config =
  {
    :gem_source_path_patterns => [ "/home/erictucker/NetBeansProjects/*" ],
    :gem_source_path_exclusion_patterns => []
  }
require 'runtime_gem_includer'

Now simply in any source file as you normally would:

require 'my_gem_name'

And you’re off to the races!

Gems are dynamically built and installed at runtime (accomplished by overriding Kernel::require).  Edit everywhere, click run, watch the magic! There may be some applications for this Gem in continuous integration. Rebuilds and reloads of specified Gems should occur during application startup/initialization once per instance/run of your application.

Interested in source, documentation, etc.? http://rtgemincl.rubyforge.org/

More Efficient Software = Less Energy Consumption: Green Computing isn’t just Hardware and Virtualization

Originally published 16 November 2009.

Green is a great buzzword, but the real-world driver for many “green” efforts is cost. Data center power is expensive. Years ago, Oracle moved a major data center from California to my town Austin, Texas. A key reason: more predictably priced, cheaper power in Texas vs. California. What if Oracle could make the data center half the size and take half the power because its software ran more efficiently?

Your bank, your brokerage, Google, Yahoo, Facebook, Amazon, countless e-commerce sites and more often require surprisingly many servers.  Servers have traditionally been power-hungry things favoring reliability and redundancy over cost and power utilization.  As we do more on the web, servers do more behind the scenes.  The amount of computing power or various subsystem capabilities required varies drastically based on how an application works.

These days, hardware vendors across the IT gamut try to claim their data center and server solutions are more power efficient. The big push for consolidation and server virtualization (the practice by which one physical server functions as several virtual servers which share the hardware of the physical machine) does make some real sense.  In addition to using less power, such approaches often simplify deployment, integration, management and administration. It’s usually easier to manage fewer boxes than more, and the interchangeability facilitated by things like virtualization combined with good planning make solutions more flexible and able to more effectively scale on demand.

Ironically, the issue people seem to pay the least attention to is perhaps the most crucial: the efficiency of software.  Software orchestrates everything computers do.  The more computer processors, memory, hard drives and networks do, the more power they need and the bigger or more plentiful they must be. The more operations servers must perform, the more servers (or the more power-hungry servers) one needs.  The software is in charge.  When it comes to the operations the computer performs, the software is both the CEO and the mid-level tactical managers that can make all the difference in the world.  If software can be architected, coded or compiled to manage its work more efficiently, the number of operations per unit of work produced goes down.  Every operation saved means power saved.

Computers typically perform a lot of overly redundant or otherwise unneeded operations. For example, a lot of data is passed across the network not because it absolutely needs to be, but because it’s easier for a developer to build an app that operates that way or the application to be implemented that way in production. There are applications that use central databases for caches when a local in-memory cache would not only be orders of magnitude faster but also burn less power. Each time data goes across a network it must be processed on each end and often formatted and reformatted multiple times.

A typical web service call (REST, SOAP, etc) – the so-called holy grail of interoperability, modularity and inter-system communication in some communities – is a wonderful enabler, but it does involve parsing (e.g. turning text data into things the computer understands), marshalling (a process by which data is transformed, typically to facilitate transport or storage) and often many layers of function calls, security checks and other things.  The use of web services is not inherently evil, but far more carbon gets burned making a web service call to a server across the country (or even inches away) than having the computer talk to its own memory.  It’s also a lot slower.

Don’t get me wrong, I’m a big believer in the “army of ants” approach. However, I see the next big things in power utilization being software driven. We’re going to reach a point where we’ve consolidated all we reasonably can, and at that point it’s going to be a focus on making the software more efficient.

If my code runs in a Hadoop-like cluster (Hadoop is open source software that facilitates computing across many computers) and the framework has tremendous overhead compared to what I’m processing, how much smaller could I make the cluster if I could remove that overhead? What if I process more things at once in the same place? What if I batch them more? What if I can reduce remote calls? What if I explore new languages like Go with multi-core paradigms?  What if widely deployed operating systems like Linux, Windows and MacOS became more power efficient?  What if widely used apps consumed less power-hungry memory?  What if security software took fewer overhead CPU cycles?  Can we use multi-core processing more efficiently?

In most cases, performance boosts and power savings go hand-in-hand.  Oriented toward developers, here are a few more obvious areas for improvement.  Most are pre-existing good software design practices:

– Caching is the first obvious place:  (1) more caching of information, (2) less reprocessing of information, (3) more granular caching to facilitate caching where it was not previously done.

– Data locality:  Do processing as close to where data resides as possible to reduce transportation costs.  Distance is often best measured not in physical distance but in the number of subsystems (both hardware and software) that data must flow through.

– Limit redundant requests:  Once you have something retrieved or cached locally, use it intelligently:  (1) collect changes locally and commit them to a central remote location such as a database only as often as you need to, (2) use algorithms that can account for changes without synchronizing as often with data on other servers.

– Maximize use of what you have:  A system is burning power if it’s just on.  Use the system fully without being wasteful:  (1) careful use of non-blocking (things that move on instead of having the computer wait for a response from a component) operations in ways that let the computer do other things while it’s waiting;  (2) optimize the running and synchronization of multiple processes to balance use, process duration and inter-process communication such that the most work gets done with least waiting or overhead.

– Choose the language, platform and level of optimization based on amount of overall resources consumed:  Use higher performance languages or components and more optimizations for sections which account for the most resource utilization (execution time, memory use, etc.).  Conversely, use easier to build or cheaper components that account for less overall resource use so that more focus can go to critical sections.  (I do this in practice by mixing Ruby, Java and other languages inside the JRuby platform.)
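
To illustrate the caching and “limit redundant requests” points above, here’s a rough Ruby sketch; the remote object and its fetch_profile/bulk_update calls are hypothetical stand-ins for a database or web service client:

class UserProfileStore
  def initialize(remote)
    @remote = remote          # hypothetical database or web service client
    @cache = {}               # local in-memory cache: avoids repeated remote reads
    @pending_updates = []     # changes collected locally, committed in batches
  end

  # Each profile is fetched from the remote system at most once.
  def profile(user_id)
    @cache[user_id] ||= @remote.fetch_profile(user_id)
  end

  # Writes are batched rather than committed one network round trip at a time.
  def update(user_id, attrs)
    @pending_updates << [user_id, attrs]
    flush if @pending_updates.size >= 100
  end

  def flush
    @remote.bulk_update(@pending_updates) unless @pending_updates.empty?
    @pending_updates.clear
  end
end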

In certain applications, maybe we don’t care about power utilization or much at all about efficiency, but as applications become increasingly large and execute across more servers, development costs in some scenarios may become secondary to computing resources.  Some goals are simply not attainable unless an application makes efficient use of resources, and that focus on efficiency may pay unexpected dividends.

Developers, especially of large-scale or widely deployed applications: if we want to be greener, let’s focus on run-times, compilers and the new and yet-to-be-developed paradigms for distributed, massively multi-core computing.

There is a story that Steve Jobs once motivated Apple engineers to make a computer boot faster by explaining how many lifetimes of waiting such a boost might save.  Could the global impact of software design be more than we imagine?

[ Edit 14 October 2018: See also: Math Pierces Steve Jobs’ Reality Distortion Field after 35 Years. ]

Why Ruby on Rails + JRuby over PHP: My Take, Shorter Version

As a Ruby, Java and occasional C/C++ developer who has also written some production code in PHP, I work with and tend to prefer the power and flexibility provided by a JRuby + NetBeans + Glassfish stack over PHP.  Here is my attempt to briefly describe not only why, but also to encourage others to develop in RoR rather than PHP:

Pros

–          Exceptionally high developer productivity with:

  • “Programming through configuration” philosophy
  • Emphasis on rather complete default behaviors
  • Write-once (DRY) orientation
  • Simple ORM (ActiveRecord) means a lot less SQL with minimal fuss
  • Dynamically typed language means a lot less thinking about variable declarations
  • Result:  A lot less grunt work; more focus on “real work”

–          Strongly encourages clean MVC architecture

–          Test frameworks

  • TestUnit is easy to use and effective
  • Enables test driven development (TDD) often omitted in PHP world
  • UI mocking frameworks are available

–          Pre-packaged database migrations feature eases schema creation and changes

  • Helper methods further simplify schema changes and help you avoid writing SQL (see the sketch after this list)
  • Roll back or forward to arbitrary versions
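
As a small illustration, here’s a sketch of a hypothetical posts table written against the Rails 2.x-era API current when this was posted: the migration is built from helper methods instead of SQL, rake db:migrate applies it, and rake db:migrate VERSION=n rolls the schema back or forward to an arbitrary version.

class CreatePosts < ActiveRecord::Migration
  def self.up
    create_table :posts do |t|   # create_table generates the DDL for you
      t.string :title
      t.text   :body
      t.timestamps               # created_at / updated_at columns
    end
  end

  def self.down
    drop_table :posts
  end
end

# And on the ORM side, still no hand-written SQL:
# Post.find(:all, :conditions => ["created_at > ?", 1.week.ago], :order => "created_at DESC")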

–          Significant pre-packaged forms and JavaScript/AJAX UI support

–          Ruby language easy to learn and more versatile

  • Like PHP, Ruby language’s initial learning curve is much easier than Java, C#, etc.
  • Like PHP, Ruby language conducive to scripting as well as slightly better OOP support
  • Ruby language skills can be leveraged for use in environments outside web applications

–          Vendor support by Sun Micro

  • Dedicated team and significant JRuby project
  • Good support in NetBeans IDE
  • Quality Glassfish app server from JEE world
  • Provides integrated NetBeans, Glassfish, JRuby stack in one download

–          Tap JEE power from within Ruby

  • JRuby allows fairly seamless access to Java and JEE libraries and features as well as your own Java code should you desire
  • Result:  You can start simple without being boxed in, and you can later add a lot of enterprise-grade sophistication.

–          Community

  • Contains a lot of talent from JEE world
  • Libraries that implement simpler versions of enterprise-oriented features
  • Community tends to be rather friendly and inclusive

Cons

–          Maturity

  • Despite making huge strides, acceptance remains low at more conservative companies
  • Hosting options limited in comparison to PHP
    • Dedicated server or VPS
    • Amazon EC2
    • Smaller pool of shared hosts
  • The ORM can be a memory hog
  • Fewer jobs open due to fewer projects (job to applicant ratio might be greater though?)
  • Fewer sysadmins and established maintenance procedures
  • Less support, fewer developers to maintain RoR apps

–          LAMP-like scalability limitations: a conventional RoR architecture is comparable to, or more resource intensive than, most PHP solutions

–          Of course, if venturing heavily into cross-platform JEE territory the learning curve steepens dramatically