
HP Has Enough Workers to Fill a City—And It Needs Them All

Silicon Valley technology giant HP will lay off as many as 30,000 more people as part of its split into two separate companies, the company told analysts this week. This comes on top of the 55,000 jobs HP has been in the process of shedding in recent years. Even so, HP is still as large as a mid-sized US city. As of May, according to Forbes, the company numbered 302,000. But what in the world do all those people do?

In an era in which WhatsApp can serve 900 million users with just 50 engineers, the massive enterprise tech company feels like an anachronism. In HP’s case, its huge headcount doesn’t even include outsourced labor, such as call center operators or the assembly line workers who actually build all those printers and laptops. But it turns out that the business of selling technology to businesses has long required something old-fashioned: lots and lots of people (at least for now).

The typical consumer probably thinks of HP as a printer and PC company, but it’s much more than that. It’s also a massive information technology consulting operation with a large portfolio of business software and cloud computing offerings. HP’s forthcoming reorganization will create two businesses: one called HP Enterprise, which will include its consulting and software businesses, and the other called HP Inc., which will continue to sell printers and PCs. The current round of layoffs is aimed at the HP Enterprise side. HP doesn’t break down how many employees work in each of its divisions, but HP Enterprise is likely where the bulk of its employees work, judging in part by the size of other large IT services companies (IBM had 379,592 employees last year; Accenture had 323,000).

Forrester vice president Peter Burris says the reason companies like HP and IBM need so many workers is that selling software to enterprise customers is far different from creating software for consumers.

All 900 million WhatsApp users use the exact same app. You download it from an app store, and that’s that. But big companies like banks, insurance companies, and large hospitals need software tailored to their particular needs. Instead of just building the application once and selling it to a client, these companies and their clients have an ongoing relationship.

That’s because IT consultants aren’t just going in and telling a customer what to do. Typically, the consulting firm is involved in planning, building, maintaining, and supporting new software. That means talking with employees about what they need out of a new piece of software, working with other software vendors on integrations between products, training employees, and fielding tech support calls. And that takes a lot of people. Many of those consultants work with customers on an ongoing basis, limiting the number of different customers any one employee can work with. That’s the difference between making a software product like WhatsApp and selling consulting services.

“A product sale has a clear moment where a title is exchanged,” he says. “But with services, the sale happens over time. It’s a process. You’re literally transferring knowledge about how to solve problems.”

Who Needs an Army?

It’s easy to be skeptical about whether customers are really getting their money’s worth from big companies, considering that 68 percent of all large IT projects fail. Surely there are instances of a company overselling its services, or trying to save a doomed project by simply throwing more people at the problem. You can count on large bureaucracies to add inefficiencies and bloat to any project.

That’s starting to change, however. Yes, “cloud computing” is an over-broad term, but cloud-based services like Amazon Web Services and Salesforce have changed the way large companies do business. It’s easier than ever for a business manager to simply buy some software and have their employees start using it immediately.

In the past, even something as simple as an instant messaging application that integrates with your company’s project management system would have been an ordeal to implement. You would have had to negotiate a price for a piece of software like IBM’s Sametime, set up a new server in your data center, install software on your employees’ desktops, and hire consultants to integrate your project management software with the instant messaging server.

Today, you could just sign up for Slack, a trendy workplace chat app, and start using it over the web immediately without ever having to talk with a salesperson. Slack comes with dozens of integrations with other applications right out of the box. It even has an application programming interface—API for short—that makes it easy for app developers to build support for Slack right into their own products. And Slack is hardly unique among new-age business apps in offering easy integrations. Tools like Zapier make it easy for even non-programmers to stitch different applications together. The upshot is that, increasingly, you don’t need an army of consultants to get all your software up and running and working together.
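To make that concrete, here is a minimal sketch of the kind of integration Slack’s API enables: posting a message to a channel through an incoming webhook. The webhook URL below is a placeholder (Slack generates a real one when you add the integration to a workspace), and the sketch assumes the third-party requests library.

```python
# Minimal sketch: post a message to a Slack channel via an incoming
# webhook. The URL below is a placeholder, not a real endpoint.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"

def notify(text):
    """Send a plain-text message to the channel tied to the webhook."""
    response = requests.post(WEBHOOK_URL, json={"text": text}, timeout=10)
    response.raise_for_status()  # surface HTTP errors rather than hiding them

notify("Nightly build finished without errors.")
```

A few lines of glue like this, rather than a consulting engagement, is all that many of these integrations now require.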

Meanwhile, open source technology is making it easier to use freely available components, freeing software developers from building the same common features again and again. Cloud services and open source software were once most associated with small startups looking to save money. But as these startups—Facebook, for example—have grown into large enterprises, they’ve often stuck with these newer tools, and more established organizations are following suit.

IT’s Legacy

Of course, not all of a company’s software can be replaced by off-the-shelf apps. And there are plenty of consultants that specialize in customizing cloud applications like Salesforce. But Burris points out that there’s little to no advantage to building custom software for many common business processes, such as financial reporting or accounts payable. A custom payroll app probably won’t make your company more competitive. So there’s a strong incentive to simply move to one-size-fits-all business applications that can be supported in much the same way WhatsApp is.

The HPs and IBMs of the world have responded to these shifts by offering cloud services and ready-made business applications of their own. That’s a big part of why HP and IBM are shedding jobs right now. “In general software companies are better for owners than services businesses are,” Burris explains. “In a software business, a programmer can write a piece of code that can be used by millions of different customers and users. That intellectual property, that information about a problem, is now made available to a whole pile of people at the same time.”

But the good news for the armies of consultants working for these companies is that most older companies still have enormous amounts of data stored in old software, what people in the IT business call “legacy” systems. It will take countless hours to modernize all of those legacy systems, and, Burris says, the place most of these companies are going to turn is the legacy tech giants, like HP and IBM, that helped build a lot of these systems in the first place.

HP Will Release a “Revolutionary” New Operating System in 2015

Hewlett-Packard will take a big step toward shaking up its own troubled business and the entire computing industry next year when it releases an operating system for an exotic new computer.

The company’s research division is working to create a computer HP calls The Machine. It is meant to be the first of a new dynasty of computers that are much more energy-efficient and powerful than current products. HP aims to achieve its goals primarily by using a new kind of computer memory instead of the two types that computers use today. The current approach originated in the 1940s, and the need to shuttle data back and forth between the two types of memory limits performance.

“A model from the beginning of computing has been reflected in everything since, and it is holding us back,” says Kirk Bresniker, chief architect for The Machine. The project is run inside HP Labs and accounts for three-quarters of the 200-person research staff. CEO Meg Whitman has expanded HP’s research spending in support of the project, says Bresniker, though he would not disclose the amount.

The Machine is designed to compete with the servers that run corporate networks and the services of Internet companies such as Google and Facebook. Bresniker says elements of its design could one day be adapted for smaller devices, too.

HP must still make significant progress in both software and hardware to make its new computer a reality. In particular, the company needs to perfect a new form of computer memory based on an electronic component called a memristor (see “Memristor Memory Readied for Production”).

A working prototype of The Machine should be ready by 2016, says Bresniker. However, he wants researchers and programmers to get familiar with how it will work well before then. His team aims to complete an operating system designed for The Machine, called Linux++, in June 2015. Software that emulates the hardware design of The Machine and other tools will be released so that programmers can test their code against the new operating system. Linux++ is intended to ultimately be replaced by an operating system designed from scratch for The Machine, which HP calls Carbon.

Programmers’ experiments with Linux++ will help people understand the project and aid HP’s progress, says Bresniker. He hopes to gain more clues about, for example, what types of software will benefit most from the new approach.

The main difference between The Machine and conventional computers is that HP’s design will use a single kind of memory for both temporary and long-term data storage. Existing computers store their operating systems, programs, and files on either a hard disk drive or a flash drive. To run a program or load a document, data must be retrieved from the hard drive and loaded into a form of memory, called RAM, that is much faster but can’t store data very densely or keep hold of it when the power is turned off.

HP plans to use a single kind of memory—in the form of memristors—for both long- and short-term data storage in The Machine. Not having to move data back and forth should deliver major power and time savings. Memristor memory also can retain data when powered off, should be faster than RAM, and promises to store more data than comparably sized hard drives today.
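HP has not published the programming interface for The Machine, but memory-mapped files in today’s operating systems offer a rough analogy for what a single-level store feels like to a programmer. The sketch below, in standard Python, contrasts the conventional copy-into-RAM model with mapping a file’s bytes directly into the program’s address space; records.bin is a hypothetical data file standing in for persistent memory.

```python
# Rough analogy only: memory-mapping with today's APIs, not The
# Machine's actual interface, which HP has not published.
import mmap
import os

PATH = "records.bin"  # hypothetical file standing in for persistent memory

# Conventional model: copy the data from storage into a RAM buffer first.
with open(PATH, "rb") as f:
    buffered = f.read()  # the whole file is duplicated into RAM

# Mapped model: the file's bytes become part of the address space,
# so the program reads them in place and touches only what it needs.
fd = os.open(PATH, os.O_RDONLY)
try:
    with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as view:
        header = bytes(view[:16])  # read 16 bytes with no up-front bulk copy
finally:
    os.close(fd)
```

In a machine where memory itself is persistent, even the mapping step would disappear: data structures would simply live at addresses the program can use directly.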

The Machine’s design includes other novel features, such as optical fiber instead of copper wiring for moving data around. HP’s simulations suggest that a server built to The Machine’s blueprint could be six times more powerful than an equivalent conventional design while using just 1.25 percent of the energy and being around 10 percent the size; six times the performance at 1.25 percent of the power works out to roughly a 480-fold gain in performance per watt.

HP’s ideas are likely being closely watched by companies such as Google that rely on large numbers of computer servers and are eager for improvements in energy efficiency and computing power, says Umakishore Ramachandran, a professor at Georgia Tech. That said, a radical new design like that of The Machine will require new approaches to writing software, says Ramachandran.

There are other prospects for reinvention besides HP’s technology. Companies such as Google and Facebook have shown themselves to be capable of refining server designs. And other new forms of memory, all with the potential to make large-scale cloud services more efficient, are being tested by researchers and nearing commercialization (see “Denser, Faster Memory Challenges Both DRAM and Flash” and “A Preview of Future Disk Drives”).

“Right now it’s not clear what technology is going to become useful in a big way,” says Steven Swanson, an associate professor at the University of California, San Diego, who researches large-scale computer systems.

HP may also face skepticism because it has fallen behind its own timetable for getting memristor memory to market. When the company began working to commercialize the components, together with semiconductor manufacturer Hynix, in 2010, the first products were predicted for 2013 (see “Memristor Memory Readied for Production”).

Today, Bresniker says the first working chips won’t be sent to HP partners until 2016 at the earliest.

HP Puts the Future of Computing On Hold

Plans by Hewlett-Packard for computers based on an exotic new electronic device called the memristor are scaled back.

In April I wrote about an ambitious project by Hewlett-Packard to use an electronic device for storing data called the memristor to reinvent the basic design of computers (see “Machine Dreams”). This week HP chief technology officer Martin Fink, who started and leads the project, announced a rethink of the project amidst uncertainty over the memristor’s future.

Fink and other HP executives had previously estimated that they would have the core technologies needed for the computer they dubbed “the Machine” in testing sometime in 2016, and they used a public project timeline to sketch out where the effort was headed.

But the New York Times reported yesterday that the project has been “repositioned” to focus on delivering the Machine using less exotic memory technologies: the DRAM found in most computers today and a technology just entering production called phase change memory, which stores data by melting a special material and controlling how it cools.

With memristors out of the picture, there’s reason to doubt just how revolutionary HP’s project can be.

The main feature of the Machine’s design was to be a large collection of memristor memory chips. They would allow computers to be more powerful and energy efficient by combining the best properties of two different components of today’s machines: the speed of the DRAM that holds data while a processor uses it, and the capacity of storage drives based on hard disks or flash memory, along with their ability to hold data without power.

Prototypes of the Machine built with DRAM and phase change memory in the place of memristors had always been part of the plan. But when I met Fink and others working on the project I also heard that those technologies would hobble the idea at the heart of the Machine.

Because DRAM can’t store data very densely and must always be powered on, computers built around a large block of it will require a lot of space and power. Meanwhile, phase change memory is too slow compared to DRAM to be much use for data being worked on. When I met Stan Williams, who leads HP’s work on memristors, he dismissed the idea that any other technology could be used to reinvent the basic design of computers as HP wanted. Fink did a good job in a 2014 blog post of explaining why his team believed only memristors could make the Machine possible.

Still, this week’s climbdown is not a complete surprise. Fink was still using that timeline as recently as December 2014, predicting that memristor memory would “sample” in 2015 and be “launched” in 2016. But a few months later, in February of this year, he told me that sampling was most likely in 2016, an estimate that HP’s manufacturing partner SK Hynix would not confirm. Microelectronics experts I spoke to said it looked challenging to make reliable memristors in the large, dense arrays needed for a memory chip.

HP now appears to be avoiding making any prediction for when the technology will be mature. The company has not yet responded to a request for comment.