Politics 2012

That the Internet has changed everything is a truism. Some stunning, and sometimes alarming, takeaways from "Politics, Tech & Decision 2012," the most recent Gotham Media Ventures panel discussion, bring this home with a bang.

  • Online presence is everything. The first hire in today's political campaigns is the website team, not the campaign manager.

  • Privacy has become a quaint illusion. Database techniques now make it possible to serve and measure advertising and other messages aimed at increasingly specific market segments. The next step in targeting is knowing what you think right now. Target is already trying to identify and sell to pregnant women specifically in their third trimester. Credit card companies, by watching changes in spending patterns, can predict divorces two years before they happen with 95 percent accuracy.

  • "We all live in a yellow submarine." Personalization and digital targeting surround each of us in a membrane of filters, so there is less and less discursive conversation. We each tend to talk mostly with like-minded individuals and to be less and less exposed to opposing views.

  • More money buys more influence than ever. Super PACs in 2012 are dirtier and more powerful than ever before. They are funded by huge donations ($20 million) from really big donors. They often support shadow campaigns of tweets and viral videos that are user generated but paid for by the PACs.

  • Polls no longer speak truth. Online polls are skewed by technological flaws, and all of them are misleading. We each live in our own echo chamber (see the bullet before last). Automated polling gets under 10 percent response; it's illegal to auto-dial cell phones for research, and 35 percent of people have no landlines.

  • One thing hasn't changed. TV is still the most effective election tool for all demographics, while social media are the most persuasive tools on issues.

  • Speed counts. The Internet has increased volatility to an unbelievable extent. Being nimble has become more important than planning. How can you anticipate a potential crisis? How can you respond in Internet time?

  • Two kinds of power. It's become a bimodal world, where you either have to have the big donors locked up or have huge online broad-based support from celebrities and/or grassroots. That at least provides a ray of hope for the masses!

To give credit where credit is due: Richard Hofstetter, partner at Frankfurt Kurnit Klein & Selz, was the moderator. Panelists were Michael Bassik, managing director and US digital practice chair, Burson-Marsteller; Taegan Goddard, founder and publisher, Political Wire; Eason Jordan, former chief news executive at CNN and founder and CEO of Poll Position; and Eli Pariser, board president and former executive director, MoveOn.org. Frankfurt Kurnit hosted the event.

The Strategic Importance of Technical Computing Software

Beyond sticking processors together, sticky Technical Computing and cloud software can help organizations unlock greater business value through automated integration of Technical Computing assets: systems and applications software.

Most mornings when I am in Connecticut and the weather is tolerable, I go for a jog or walk in my neighborhood park in the Connecticut Sticks. One recent crisp, sunny fall morning, as I was making my usual rounds, I got an email alert indicating that IBM had closed its acquisition of Algorithmics, a financial risk analysis software company that would be integrated into IBM's Business Analytics division. This, along with the then-recent announcement of IBM's planned acquisition of Platform Computing (www.ibm.com/deepcomputing), sparked a train of thought that stuck with me through the holidays and through my 15,000-plus miles of travel to India and back in January 2012. Today is February 25, 2012, another fine day in Connecticut, and I want to finish a gentle three-mile jog, but I made a personal commitment to finish and post this blog today. So here it is before I head off to the Sticks!

Those of you who have followed High Performance Computing (HPC) and Technical Computing through the past few decades as I have may appreciate these ruminations more. But these are not solely HPC thoughts. They are, I believe, indicators of where value is migrating throughout the IT industry and how solution providers must position themselves to maximize their value capture.

Summarizing Personal Observations on Technical Computing Trends in the Last Three Decades – The Applications View

My first exposure to HPC/Technical Computing was as a Mechanical Engineering senior at the Indian Institute of Technology, Madras, in 1980-1981. All students were required to do a project in their last two semesters. The project could be done individually or in groups. Projects required either significant laboratory work (usually in groups) or significant theoretical/computational analysis (usually done individually). Never interested in laboratory work, I decided to work on a computational analysis project in alternate energy. Those were the days of the second major oil crisis, so this was a hot topic!

Simply put, the project was to model flame propagation in a hybrid-fuel (ethanol and gasoline) internal combustion engine using a simple one-dimensional (radial) finite-difference model, to study this chemically reacting flow over a range of concentration ratios (ethanol/gasoline : air), and to determine the optimal concentration ratio that maximizes engine efficiency. Using the computed flame velocity, it was possible to algebraically predict the engine efficiency under typical operating conditions. We used an IBM System/370, and in those days (1980-1981) the simulations ran overnight in batch mode with punched cards as input. It took an entire semester (about four months) to finish this highly manual computing task, for several reasons:

  1. First, I could run only one job a night: I had to physically go to the computer center, punch the data deck and the associated job control statements, and then look at the printed output the following morning to see if the job had run to completion. This took many attempts, as inadvertent input errors could not be detected until the next morning.
  2. Secondly, computing resources and performance were severely limited. When the job actually began running, it often would not run to completion in the first attempt and would be held in quiescent (wait) mode while the system processed other, higher-priority work. When computing resources became available again, the quiescent job would be resumed, and this could repeat multiple times until the simulation terminated normally. This back and forth often took several days.
  3. Then, we had to verify that the results made engineering sense. This was again a cumbersome process: visualization tools were still in their infancy, so interpreting the results was very manual and time consuming.
  4. Finally, to determine the optimal concentration ratio that maximizes engine efficiency, it was necessary to repeat steps 1-3 over a range of concentration ratios.

By the time the semester ended, I was ready to call it quits. But I still had to type the project report. That was another ordeal. We didn't have sophisticated word processors that could type Greek symbols and equations, create tables, and embed graphs and figures. So this took more time and consumed about half of my summer vacation before I graduated in time to receive my Bachelor's degree. But in retrospect, the drudgery was well worth it.

It makes me constantly appreciate the significant strides made by the IT industry as a whole, dramatically improving the productivity of engineers, scientists, analysts, and other professionals. And innovations in software, particularly applications and middleware, have had the most profound impact.

So where are we today in 2012? The fundamental equations of fluid dynamics are still the same, but the applications benefiting industry and mankind are wide and diverse. (For those of you who are mathematically inclined, please see this excellent one-hour video on the nature and value of computational fluid dynamics (CFD): https://www.youtube.com/watch?v=LSxqpaCCPvY.)

We also have yet another oil crisis looming ominously. There is still an urgent business and societal need to explore the viability and efficiency of alternate fuels like ethanol, and it remains a fertile area for R&D. Much of this R&D entails solving the equations of multi-component, chemically reacting, transient, three-dimensional fluid flows in complex geometries. This may sound insurmountably complex computationally.

But in reality, there have been many technical advances that have helped reduce some of this complexity:

  1. The continued exponential improvement in computer performance, at least a billion-fold over 1981 levels, enables timely calculation.
  2. Many computational fluid dynamics (CFD) techniques are now sufficiently mature, and commercial applications such as ANSYS FLUENT do an excellent job of modeling the complex physics and come with very sophisticated pre- and post-processing capabilities that improve the engineer's productivity.
  3. These CFD applications can leverage today's prevalent Technical Computing hardware architecture, clustered multicore systems, and scale very well.
  4. Finally, the emergence of centralized cloud computing (https://www.cabotpartners.com/Downloads/HPC_Cloud_Engineering_June_2011.pdf) can dramatically improve the economics of computation and reduce entry barriers for small and medium businesses.

One Key Technical Computing Challenge on the Horizon

Today, my undergraduate (1981) chemically reacting flow problem can be fully automated and run on a laptop in minutes, perhaps even on an iPad, and it would produce a "good" concentration ratio. But a one-dimensional model may not truly reflect the actual operating conditions. For that we would need today's three-dimensional, transient CFD capabilities, which could run economically on a standard Technical Computing cluster and produce a more "realistic" result. With integrated pre- and post-processing, engineers' productivity would be substantially enhanced. This is possible today.
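
To make the contrast concrete, here is a minimal, purely illustrative Python sketch of such a concentration-ratio sweep. The flame_speed and engine_efficiency functions are invented placeholders, not the original finite-difference model or real combustion physics; the point is only that the semester-long loop of 1981 now collapses into a few lines that run in seconds on a laptop.

    def flame_speed(ratio: float) -> float:
        """Placeholder for the 1-D (radial) finite-difference flame model."""
        # Toy profile: flame speed peaks at an intermediate ethanol fraction.
        return max(0.0, 1.0 - (ratio - 0.3) ** 2)

    def engine_efficiency(speed: float) -> float:
        """Placeholder algebraic map from flame speed to engine efficiency."""
        return 0.25 + 0.10 * speed

    def sweep(ratios):
        """The 1981 steps 1-4 collapse into this single loop."""
        results = {r: engine_efficiency(flame_speed(r)) for r in ratios}
        best = max(results, key=results.get)
        return best, results[best]

    if __name__ == "__main__":
        ratios = [i / 100 for i in range(5, 60, 5)]  # illustrative range, 0.05 .. 0.55
        best_ratio, best_eff = sweep(ratios)
        print(f"best concentration ratio ~ {best_ratio:.2f}, efficiency ~ {best_eff:.2f}")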

But what if a company wants to run several of these simulations concurrently and share the results with a broader engineering team, one that may wish to couple the engine operating information to the drivetrain through the crankshaft using kinematics, and then, using computational structural dynamics and exterior vehicle aerodynamics, model the automobile (chassis, body, engine, etc.) as a complete system to predict its behavior under typical operating conditions? Let's further assume that crashworthiness and occupant safety analyses are also required.

This system-wide engineering analysis is typically a collaborative and iterative process and requires several applications that must be integrated in a workflow, producing and sharing data. Much of this is manual today, and it is one of today's major Technical Computing challenges, not just in the manufacturing industry but across most industries that use Technical Computing and leverage data. This is where middleware will provide the "glue," and believe me, it will stick if it works! And work it will! The Technical Computing provider ecosystem will head in this direction.
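
As a rough illustration of what such middleware "glue" does at its simplest, here is a hedged Python sketch of a linear workflow in which each application stage consumes the data produced by the stage before it. All of the stage functions are hypothetical stand-ins for the real solvers (engine CFD, crankshaft kinematics, structural dynamics, exterior aerodynamics, crash analysis); real middleware would also move files, schedule cluster jobs, manage licenses, and track provenance.

    from typing import Callable, Dict, List

    Stage = Callable[[Dict], Dict]

    def run_workflow(stages: List[Stage], inputs: Dict) -> Dict:
        """Run each stage in order, feeding the accumulated data to the next stage."""
        data = dict(inputs)
        for stage in stages:
            # Here we just merge each stage's outputs into the shared data set.
            data.update(stage(data))
        return data

    # Hypothetical stand-ins for the real engineering applications named above:
    def engine_cfd(d):        return {"torque_curve": [d["rpm"] * 0.05, d["rpm"] * 0.06]}
    def crank_kinematics(d):  return {"shaft_loads": [t * 2.0 for t in d["torque_curve"]]}
    def structural_dyn(d):    return {"peak_stress": max(d["shaft_loads"])}
    def vehicle_aero(d):      return {"drag_coefficient": 0.31}
    def crash_analysis(d):    return {"occupant_injury_index": 0.12}

    if __name__ == "__main__":
        result = run_workflow(
            [engine_cfd, crank_kinematics, structural_dyn, vehicle_aero, crash_analysis],
            {"rpm": 3000},
        )
        print(result)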

Circling Back to IBM's Acquisitions of Algorithmics and Platform Computing

With the recent Algorithmics and Platform Computing acquisitions, IBM has recognized the strategic importance of software and middleware for increasing revenues and margins in Technical Computing, not just for IBM but also for value-added resellers worldwide who could develop higher-margin implementation and customization services based on these strategic software assets. IBM and its application software partners can give these channels a significant competitive advantage to expand reach and penetration with the small and medium businesses that are increasingly using Technical Computing. Coupled with other middleware such as GPFS and Tivoli Storage Manager, and with the anticipated growth of private clouds for Technical Computing, expect IBM's ecosystem to enhance its value capture. And expect clients to achieve faster time to value!

No Apology for High Performance Computing (HPC)

A few months back, at one of my regular monthly CTO club gatherings here in Connecticut, an articulate speaker discussed the top three IT trends that are poised to fundamentally transform businesses and society at large. The speaker eloquently covered the following three trends:

  • Big Data and Analytics
  • Cloud Computing
  • Mobile Computing

I do agree that these are indeed the top three IT trends for the near future, each at a different stage of adoption, maturity, and growth. But they are not independent trends; in fact, they are overlapping, mutually reinforcing trends in today's interconnected world.

However, while discussing big data and analytics, the speaker made it a point to dismiss HPC as an exotic niche, implying that it is restricted to scientists, engineers, and other "non-mainstream" analysts who demand "thousands" of processors for esoteric work in such diverse fields as proteomics and weather/climate prediction. This immediately made me raise my hand and object to such ill-advised pigeonholing of HPC practitioners: architects, designers, software engineers, mathematicians, scientists, and engineers.

I am guilty of being an HPC bigot. I think these practitioners are some of the most pioneering and innovative folks in the global IT community. I indicated to the speaker (and the audience) that because of the pioneering, path-breaking pursuits of the HPC community, which is constantly pushing the IT envelope, the IT community at large has benefited from now-mainstream mega innovations including open source, cluster/grid computing, and even the web itself. Many of today's mainstream Internet technologies emanated from CERN and NCSA, both organizations that continue to push the HPC envelope today. Even modern data centers with large clusters and farms of x86 and other industry-standard processors owe their meteoric rise to the tireless efforts of HPC practitioners. As early adopters, these practitioners painstakingly devoted their collective energies to building, deploying, and using the early HPC cluster and parallel systems, including servers, storage, networks, the software stack, and applications, constantly improving their reliability and ease of use. These systems power most of today's businesses and organizations globally, whether in the cloud or in some secret basement. Big data analytics, cloud computing, and even mobile/social computing (Facebook and Twitter have gigantic data centers) are trends that stand on the shoulders of the HPC community!

By IT standards, the HPC community is relatively small; about 15,000 or so practitioners attend the annual Supercomputing conference. This year's event is in Seattle and starts on November 12. But HPC practitioners have very broad shoulders, keen and incisive minds, and a passionate demeanor not unlike that of pure mathematicians. Godfrey H. Hardy, a famous 20th-century British mathematician, wrote A Mathematician's Apology, defending the arcane and esoteric art and science of pure mathematics. But we as HPC practitioners need no such apology! We refuse to be castigated as irrelevant to IT and big IT trends. We are proud to practice our art, science, and engineering. And we have the grit, muscle, and determination to continue to ride in front of big IT trends!

I have rambled enough! I have wanted to get this "off my chest" for the last few months. But with my dawn-to-dusk day job of thinking, analyzing, writing, and creating content on big IT trends for my clients, and with my family and personal commitments, I have had little time until this afternoon. So I decided to blog before getting bogged down with yet another commitment. It's therapeutic for me to blog about the importance and relevance of HPC for mainstream IT. I know I could write a tome on this subject. But lest my tome go with me, unwritten, to the tomb, an unapologetic blog will do for now.

By the way, G. H. Hardy's Apology, an all-time favorite tome of mine, is not really an apology. It's a passionate story explaining what pure mathematicians do and why they do it. We need to write such a tome for HPC to educate the broader IT community. But for now this unapologetic blog will do. Enjoy. It's dusk in Connecticut. The pen must come off the paper. Or should I say the finger off the keyboard? Adios.

The US Healthcare System – One Big Tax on the Economy – Beyond Costs and Operational Efficiencies – Innovation is Critical – Technology Helps.

It's well known that US healthcare costs are skyrocketing. Estimates range from 15% to 20% of US GDP, a greater share than in any other developed nation. Left unchecked, this will be a big burden that today falls largely on US employers and businesses. And these businesses have to pass the costs on to their customers, making them cost-uncompetitive in an increasingly globalized world. I found the following recent articles very illuminating in describing the challenges in US healthcare and the implications of globalization:

  1. The Big Idea: How to Solve the Cost Crisis in Health Care, Robert S. Kaplan and Michael E. Porter, Harvard Business Review, September 2011.
  2. The Risks and Reward of Health-Care Reform, Peter Orszag, Foreign Affairs, July/August 2011.
  3. How America Can Compete – Globalization and Unemployment, Michael Spence, Foreign Affairs, July/August 2011.

But the big question is what each of us can do individually, collectively in an organization, and in our ecosystem across organizations – nationally and globally.

On a recent weekend, October 1, I attended a talk by Dr. Atul Gawande sponsored by the New Yorker magazine and IBM. It was preceded by an exclusive breakfast meeting with Atul. I was fortunate to be invited, and I thank IBM for a very gracious invitation to this event hosted by Dr. Paul Grundy of IBM, who is also president of the Patient-Centered Primary Care Collaborative. At breakfast, I also got to spend some quality time with the publisher of the New Yorker and other doctors (all medical, not the Poor Hungry Doctor (Ph.D.) kind, like yours truly!) who are all facing the challenges of the US healthcare system.

During the breakfast event and the subsequent talk, much of the emphasis was on reducing costs and improving operational efficiencies in the US healthcare system. Dr. Gawande was very effective in conveying his path-breaking ideas on how checklists and coaching can greatly improve a surgeon's performance and result in far better patient outcomes.

Dr. Gawande started with the premise that we all reach a plateau at one point or the other in our lives and careers. And as we push ourselves to become better at what we do, the marginal benefits of our efforts seem to be all for naught. So what can we do? How can we increase our operational efficiency? His recipe marries continuous learning with coaching.

I encourage everyone interested in this subject to read his recent article in the New Yorker and also his book on checklists. The book also covers professions beyond surgery, including architects, athletes, and others. It stresses that, in addition to continuous learning throughout one's life, a coach is an essential partner for continuous self-improvement in any profession, particularly those that are knowledge-based. That clearly includes mine: Information Technology (IT) analyst and entrepreneur.

As IT professionals, we face a harsh reality: our lives have become complex. We all have to do more with less, with less time and leaner budgets. And yet we also have to do more with more, as we are drowned in data, interruptions, and regulations. This more or less is driving us nuts. Everything is escalating at a frantic pace while margins continue to dwindle. We are constantly challenged to improve, every day, in what we do operationally.

Part of the problem is IT itself. IT in some ways has caused this problem and I think IT is also part of the solution. I constantly ask myself these reflective questions: Is speed a virtue? Is Big Data really that useful? Is constant improvement always better? I think the answer to these questions is the proverbial “Yes and No” which drives me further nuts. Being an engineer, I like the determinism of a precise unambiguous answer. I like the precision of checklists but clearly also appreciate the value of coaching! So it is Yes and No for me now on these philosophical issues.

While IT has made a very positive impact on improving the operational efficiency of healthcare, process innovations are also required (some IT-enabled, others requiring business incentives). In fact, in response to a question from the audience, Atul gave an example of how a surgeon in his hospital took a standard but lower-cost surgical gauze and cut it so that it was better fit for purpose, or tuned to task, rather than using the more expensive pre-cut gauze. This adjusted process was then adopted by several surgeons in the hospital, resulting in substantial savings in operational costs while improving patient outcomes. This was clearly a business process innovation!

But IT must itself be tuned to task and fit for purpose. In short, IT must become smarter; it's what IBM calls Smarter Computing. With Watson and other related smart IBM efforts, and by fostering collaboration across the healthcare ecosystem (Dr. Grundy's efforts), IBM is providing the incentive and impetus needed to help address the challenges of the US healthcare system. With events such as the one on October 1, IBM and its partners are providing the mentoring and coaching for everyone touched by the healthcare system!

It Takes Two To Tango!

A classic mistake for start-ups is to ask one name to fly solo. It’s usually the product name. Since entrepreneurs have to be obsessed with their product to start a business, that’s probably to be expected. But then how can you be sure you’re talking about the company when it has the same name as the product? Which brand promise do you want to imply or express?

Apple Computer owes its name to a small apple farm where Steve Jobs spent time each year with friends in the mid-70s. And it was the name of both the product and the company until the Lisa, Mac and other new products came along. That’s the typical pattern for start-ups. It was also a long time ago in terms of today’s marketplace.

Today you need all the brand power you can get to claim and hold a place in customer minds, and you need this the minute you start marketing.

Products can be treated as brands - given proprietary names and a brand platform as the backbone of marketing communications efforts. Or they can be given descriptive names and associated with a brand. But they’re missing out if they are denied the halo effect of a corporate brand.

The corporate brand is the face of a business strategy – what the company wants to be known for. In time, it becomes the organizing principle that simplifies the complexity of multiple products and the umbrella that facilitates new product acceptance.

The cost need be no greater for two than for one if you do it right – and you’ll build a far stronger foundation for ongoing sales and profits with both.

Software Everyware – Hungry or Happy?

I recently attended the IBM Innovate conference as an IBM guest analyst. At the outset, I must thank IBM – especially their outstanding Software IT analyst team – for being an excellent host and providing us a forum to get a lot of valuable technical and business information on the IBM Rational portfolio of solutions targeted at developers and the IT community. The overarching theme was Software Everyware.

As I returned to Connecticut, this theme got me thinking. Those of you who know me know that I am a foodie. Those who know me better also know that not only do I relish good food, but I also like to sample and customize it and get it "at a whim" when traveling or on the road. My close friends and family often think that my whole world revolves around food! And travel itineraries are purposefully built around it!

So during one of the round tables, when an IBM executive drew an analogy between integrated software solutions and being Hungry and wanting food, it resonated very well with me. The scenario he painted went as follows: imagine you are driving and you want to stop to get some food. Using Yelp on your smartphone, you get a list of nearby restaurants serving something close to what you are yearning for, and you read the reviews. Then, using Groupon, you check whether there are any coupons you could use; then, using GPS, you arrive at the restaurant and have a good meal that leaves you Happy and sated! This is great, but it can be better!

Now, taking this further, he said, imagine if all of this were integrated, and all you do is press a Hungry button, similar to the Staples Easy button. Voilà: all these processes and applications are integrated, and you arrive at the restaurant with less manual action on your part. Perhaps, with a meal ordering system integrated as well, you could start munching your delicious meal as soon as you arrive at the restaurant, a classic Just in Time (JIT) system! This could make you even Happier!
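
For the software-minded reader, here is a small, hypothetical Python sketch of that Hungry button: one call that chains restaurant search, coupon lookup, navigation, and meal ordering. The service functions are invented stubs, not the real Yelp, Groupon, GPS, or ordering APIs; the point is only how integration hides the manual steps behind a single call.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MealPlan:
        restaurant: str
        coupon: Optional[str]
        route: str
        order: str

    # Hypothetical stubs standing in for real services:
    def find_restaurants(craving, location):
        return [{"name": "Spice Route", "rating": 4.5}, {"name": "Dosa Den", "rating": 4.2}]

    def find_coupon(restaurant):
        return "10% off entrees"

    def plan_route(location, restaurant):
        return f"{location} -> {restaurant}"

    def place_order(restaurant, craving):
        return f"{craving} ready on arrival at {restaurant}"

    def hungry(craving: str, location: str) -> MealPlan:
        """The single 'Hungry button': chain the services end to end."""
        best = max(find_restaurants(craving, location), key=lambda r: r["rating"])
        return MealPlan(
            restaurant=best["name"],
            coupon=find_coupon(best["name"]),
            route=plan_route(location, best["name"]),
            order=place_order(best["name"], craving),
        )

    if __name__ == "__main__":
        print(hungry("masala dosa", "somewhere on I-95 in Connecticut"))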

So Software Everyware allows you to collaborate, integrate, and innovate! And yes, become Happy faster while minimizing manual effort!

But innovation is not just about technology or products but rather about the careful design and optimization of the business with people, processes, policies, and partners with purpose, passion, persistence, and perspiration! This is what I witnessed at the IBM Rational Innovate conference. Beyond Software Everyware, it was also Happy Everyware!

I went in Hungry to learn and returned Happy! And I didn’t press any buttons! My world has become better!

OPEN VIRTUALIZATION ecosystem continues to gather momentum – New KVM Alliance

Today’s enterprise data center crisis is largely caused by the sprawl of under-utilized x86 systems, ever escalating electricity costs, and increasing staffing costs. Using virtualization to centralize and consolidate IT workloads, many organizations have significantly reduced their IT capital costs, reduced operational expenses, improved IT infrastructure availability, and achieved better performance and utilization.

Last month, Red Hat, Inc. (NYSE: RHT) and IBM (NYSE: IBM) announced that they are working together to make products and solutions based on KVM (Kernel-based Virtual Machine) technology the OPEN VIRTUALIZATION choice for the enterprise. Several successful deployments, such as the IBM Research Compute Cloud (RC2) and BNP Paribas, were highlighted.

Later that month, BMC Software, Eucalyptus Systems, HP, IBM, Intel, Red Hat, Inc., and SUSE announced the formation of the Open Virtualization Alliance, a consortium intended to accelerate the adoption of open virtualization technologies, including KVM.

The benefits of KVM (https://www.cabotpartners.com/Downloads/IBM_Linux_KVM_Paper.pdf) include outstanding performance on industry-standard benchmarks, excellent security and reliability, powerful memory management, and very broad support for hardware devices, including storage. Further, since KVM is part of Linux, clients benefit from the numerous advantages of Linux, including lower TCO, greater versatility, and support for the widest range of architectures and hardware devices. Moreover, Linux performs, scales, is modular and energy-efficient, is easy to manage, and supports an extensive and growing ecosystem of ISV applications.
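
As a small taste of how approachable KVM management is from Linux, here is a minimal sketch that inventories the guests on a host using the libvirt Python bindings. It assumes the libvirt-python package and a local QEMU/KVM hypervisor at qemu:///system, and it is only an illustration, not a consolidation or deployment recipe.

    import libvirt  # requires the libvirt-python package and a local KVM host

    def list_guests(uri: str = "qemu:///system"):
        """Print each guest's name, state, and configured memory."""
        conn = libvirt.open(uri)  # raises libvirt.libvirtError if it cannot connect
        try:
            for dom in conn.listAllDomains(0):
                state = "running" if dom.isActive() else "shut off"
                # maxMemory() reports KiB; convert to MiB for readability.
                print(f"{dom.name():20s} {state:9s} {dom.maxMemory() // 1024} MiB")
        finally:
            conn.close()

    if __name__ == "__main__":
        list_guests()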

While we believe that OPEN VIRTUALIZATION holds great promise to address the crises in today's data centers and is a key enabling technology for clients contemplating a transition to cloud computing, its success, and that of the alliance members, will depend largely on how this new alliance grows and how alliance members can:

  • Build a more complete and robust IT ecosystem that includes Independent Software Vendors (ISVs), Systems Integrators (SIs), and other data center/cloud solution providers.
  • Provide a MEASURED MIGRATION path (http://cabotdatacenters.wordpress.com/2011/05/27/measured-migration-is-smart-for-the-datacenter-and-clouds/) to existing clients who have substantial IT investments in proprietary virtualization technologies.
  • Deliver differentiated offerings (systems, complementary software, and services) that best address the growing client workloads and data center crises now and in the future.

More proof points of this alliance's momentum would be the participation of a major ISV or SI as a key driving member and/or the adoption of OPEN VIRTUALIZATION in mission-critical environments at banks or in large-scale government environments that demand bulletproof security and reliability. We think this will happen, sooner rather than later, as the KVM alliance momentum builds!

In the end, the Open Source (VIRTUALIZATION included) movement has always been about providing clients the flexibility of choice, growth, and customization by avoiding the proprietary traps of vendor lock-in, while maintaining the most stringent enterprise-grade requirements of security, reliability, and quality of service!

MEASURED MIGRATION is Smart for the Datacenter and Clouds

Imagine the solar energy needed to convert the earth’s water mass to clouds! Likewise, with legacy IT investments estimated to be in the trillions of dollars in an interconnected global IT environment, the sheer effort to migrate even a modest fraction of these environments to the cloud can be colossal.

Yet in the past few years the predominant debate in enterprise IT seems to be around the rate and pace of the transition to the cloud starting with the need to make the datacenter smart and agile.

While we believe that cloud computing will dramatically impact the way IT services are consumed and delivered in the future, we also believe that this transition must be thoughtful and measured. Companies must have a MEASURED MIGRATION trajectory that is staged to minimize risks and maximize returns. They must take great care to examine which business and IT processes can be migrated with an eye to optimizing their business metrics without assuming needless risk.

Numerous surveys suggest that cloud computing will be more than a $100 billion opportunity by 2015 and that a large fraction of IT solutions will be delivered over the cloud in the next few years. While we could debate the precise estimates, we believe that:

  • The market opportunity is large with growth rates much faster than the overall IT industry,
  • Private and hybrid clouds will become the dominant cloud delivery models as enterprise workloads begin to leverage the promise of clouds and security concerns persist with public clouds,
  • Before making substantial new cloud investments, businesses will carefully examine the business case that will be primarily driven by their current and future workload needs, and lastly,
  • Customers will rely on cloud providers who have the deepest insights into their workloads and can deliver a broad portfolio of cloud software, services, and systems optimized to these workloads with a MEASURED MIGRATION strategy.

The winners, we believe, will be those IT solution providers who will not only have promising technology solutions in such cloud enabling technologies as virtualization, scalable file systems, end-to-end systems management, etc., but also have a strategic vision and execution path that facilitates this through MEASURED MIGRATION.

IBM's Systems Software division, part of the Systems and Technology Group (STG), is one such large solution provider, with an impressive array of over 16 cloud-enabling IaaS technologies ranging from virtualization and systems management to scalable file systems, high availability, and disaster recovery. More importantly, in recent briefings we were impressed by the strategy and vision articulated by the leaders of these IBM units. These leaders consistently emphasized the need to build end-to-end solutions and staged engagement methodologies that not only deliver best-in-class technology solutions but also help clients with MEASURED MIGRATION as they modernize their datacenters or embark on the transition to cloud computing.

We heard these senior executives articulate the need for IT environments to be “tuned to task”, “optimized through comprehensive systems management”, “staged migration to private clouds and then seamlessly integrated with public clouds to manage spiky workloads”, etc. All this is critical for MEASURED MIGRATION.

In fact, at a later briefing, we learned that IBM has a growing Migration Services group that has grown by almost a factor of 10 in just the past six years or so. This "Migration Factory" is, we believe, a major driver of IBM's substantial recent revenue growth across STG, especially in the Linux/Unix market.

With thousands of successful migrations and competitive wins, we believe IBM and its ecosystem partners have the resources and track record to scale this MEASURED MIGRATION to the cloud. It’s a strategy that will ultimately – over the next decade or more – transition a significant part of today’s IT investments on our earth to the clouds!

Why engineering needs high performance cloud solutions

The design and engineering function in companies is in crisis. The engineering community must deliver designs better, faster, and cheaper; design high-quality products with fewer designers and across a distributed ecosystem of partners; and respond to increased CIO cost control of engineering IT.

And they must do all of this in an operational reality of siloed data centers tied to projects and locations; limited or poor operational insight; underutilized resources that still miss peak demands; and designers tied to local deskside workstations with limited collaboration.

The way to overcome these issues is to transform siloed environments into shared engineering clouds – private and private-hosted initially and transitioning to public over time. To achieve this, engineering functions require interactive and batch remote access; shared and centralized engineering IT; and an integrated business and technical environment.

This will untether designer skills from a single location; provide greater access to compute and storage resources; align resources with project priorities; and deliver improved operational efficiency and competitive cost savings.

Branding: It’s the Muscle in Marketing

“In the end there is no brand until the clients recognize it after it has been marketed.” So said a dear friend as we debated start-up priorities. To him, branding appears to be fluff and of no consequence to a start-up. But even friends as smart as this one can be wrong. In this case, my friend has confused brand image with the branding process.

He’s absolutely right that brand image exists solely in customer minds, the result of repeated experiences with the brand, and marketing is generally key to building brand awareness, if not brand preference.

But there can be no effective marketing until you’ve laid the strategic base for it with branding.

Marketing consists of techniques and tactics that drive sales. Branding is the business strategy that grows out of the art of differentiating your business, your product or service in a compelling way. It’s the strategic process that gives direction to marketing and makes differentiation actionable. Without differentiation you’re just a commodity with no choice but to sell on price.

“Think Different” epitomizes the branding strategy that enabled a computer hardware company to turn its reputation around by differentiating it from dozens of competitors. Establishing a brand image as an innovator and supporting that with one innovative product after the other turned Apple’s business around beginning in 1997. The ad slogan would be no more than clever words without the brand strategy behind it.

Blue Ribbon Sports was just another athletic shoe company until it developed the branding strategy that made it the company we know as Nike today. Nike was the Greek goddess of victory, and that was the image the company used to differentiate its products: first by naming its first shoe design Nike, then by changing the company name to Nike and, soon after, beginning to sponsor professional athletes. Marketing led to the great "Just Do It" ad slogan in 1988, and the rest is history.

Cadabra, one of the early online bookstores, designed a strategy of diversification, choosing to symbolize its vision with the name of the world's largest river, Amazon, and then living up to that name to become the country's largest online retailer.

So, think different – just do it – become the biggest, the best or both by differentiating your business with the right brand strategy and then implementing this with effective marketing.