The evolution of enterprise technology mandates that we rethink our approach to IT security. The security environment we live in today is simply not suited to the traditional concept of the “perimeter,” the boundary between your enterprise and the rest of the world.
The term’s military origin is itself revealing. Like IT security managers, soldiers have typically relied on “defense in depth.” An army might put barbed wire at the perimeter of a troop position, with a minefield surrounding the inner position. Anyone who made it through the barbed wire would still have to cross the minefield to reach the main target. The enterprise has been secured in this way as well. Physical servers are like the main position, the target. Firewalls and intrusion detection systems provided a secure perimeter, like the barbed wire.
Businesses and IT are in the midst of an unprecedented digital transformation. Everything from Big Data and the Internet of Things (IoT) to the cloud and advances in network technologies is changing the way things work and enabling new customer experiences.
Networks have traditionally been viewed as low-level infrastructure, but in today’s world, they are the fundamental enabler of new ways to stay connected to customers and the foundation of new, disruptive business models.
Traditionally, many products have been sold through distributors and retailers to customers in one-time transactions that were often anonymous, at least to the manufacturer. Today, however, smart, digital products and services connected over a variety of networks can be used to maintain a relationship with the customer from the point of sale onward, just about anywhere.
As we head to the SAP Best Practices for Oil & Gas conference in Houston, we should recall Peter Sondergaard’s remarks from 2011. The Gartner SVP said, “Information is the oil of the 21st century and analytics is the combustion engine.” He was speaking metaphorically, but the Oil & Gas industry knows very well how important information is to its success. When Sondergaard made that statement, West Texas Intermediate Crude (WTI) was trading at $90 a barrel. Today it’s hovering around $40. That sort of price volatility defines the industry. Yet data, analytics, and reporting can mitigate the risks of this economic environment.
Big Data…Cloud…IoT. The terms are enough to keep any tech-savvy CIO up at night. The flood of digital information, coupled with these game-changing trends, is putting pressure on IT to incorporate them quickly – or be left behind. As IT transforms into a profit center, significant cost and efficiency savings can be realized if these technologies are adopted correctly. Adding to this demand, the bar for great customer service in this new digital age is being set even higher.
In 1790, the world was in the midst of the first industrial revolution. The evolution from hand production to machines had an impact on every industry – from transportation to mining. For those who recognized the trend, it meant faster production, greater profits, and better quality. While we’ve advanced significantly since those days, companies are again on the verge of a new revolution having a similar impact.
A series on CenturyLink’s Information Technology team’s move into the cloud
As part of our journey to the cloud, our CenturyLink IT team has made a commitment to migrate 90% of our strategic applications to the cloud. The foundation for this is our long-standing strategic plan for our application portfolio that we call “cap and grow”. Driven by both natural IT strategic evolution and numerous acquisitions, we’re “capping” our investments in many legacy systems and retiring others, while “growing” our investments in existing and new, strategic applications.
Let’s talk about molehills. As we transform CenturyLink from a traditional telecom to a network and cloud provider, our IT department has embarked on a parallel journey to cloud computing. We’re moving up the stack and becoming more of an internal solution provider and less of an operations-only unit. It’s exciting, but also a little daunting. Each time we see a potential problem, we wonder, “Are we making a mountain out of a molehill?” Often, we are.
My advice to other IT teams moving to the cloud is to watch out for those molehills. Moles either dig helpful holes that aerate the soil or they eat your precious crops. Some problems are real. Others just look bad. Some present opportunities to do things better. All of them involve people, processes, and technology, and it is through those three lenses that we’ll look at our migration to the cloud.
People are a key factor in moving to the cloud. For example, development operations responsibilities are often segregated between IT operations and the development team. With the cloud, those responsibilities get blurred. The whole dev-test-deploy model gets twisted around – in a good way. If you work with your people on how the cloud transforms development and operations, you’ll be able to develop, test, and get projects in front of the customer more quickly and, frankly, create a partnership. This is quite different from the traditional approach to development, where you gather the requirements, go off for several months, and then come back with a product that may not be what the users expect or need months later.
In terms of process, moving into the cloud forces you to rethink many traditional decisions. For instance, the cloud makes you think differently about standardization. When you’re running your own hardware, you pay a lot of attention to standardizing servers, operating systems, databases, and so forth because variation is costly to support. When you’re in the cloud, many of these issues become less relevant. You’re not touching the machines. Operating systems are running on virtual machines with automated configuration. Variation is a lot simpler to manage. We can get out of “everything must be standardized” autopilot mode – which often enables us to provide more choice and flexibility to the business.
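To make the point concrete, here is a minimal sketch of configuration-as-data in the cloud: instead of hand-enforcing one hardware standard, variation becomes just another parameter the automation applies. All names and fields here are hypothetical illustrations, not a real provisioning API.

```python
# Minimal sketch: server configuration treated as data rather than a
# hand-enforced standard. Field names are illustrative assumptions.

BASELINE = {
    "os": "linux-lts",
    "monitoring_agent": True,
    "patch_window": "sun-02:00",
}

def render_config(role, overrides=None):
    """Merge per-application overrides onto the shared baseline.

    In an automated cloud environment, a "non-standard" server is just
    a different override dict; the tooling applies it the same way
    every time, so variation stops being costly to support.
    """
    config = dict(BASELINE)
    config["role"] = role
    config.update(overrides or {})
    return config

# A standard web server and a deliberately different analytics server
# cost the same to describe and provision.
web = render_config("web")
analytics = render_config("analytics", {"os": "linux-hpc", "gpu": True})
```

The design point is that the baseline still exists – it just becomes a default you can override, rather than a rule you must police.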
The cloud also forces a different perspective on end-of-life process issues. It makes you ask, “Do I really want to own all these resources in a data center that carries high overhead? Do I want to spend a lot of my time and energy focused on facilities as opposed to the business that I’m actually trying to run?” One of the best things about the cloud environment is that it allows you to put end-of-service-life work lower on the priority list than business initiatives, because the process becomes part of the DNA of your environment.
Perceived technological challenges arise over matters such as disaster recovery (DR). The cloud should make you reassess what you’re doing with DR. The cloud makes it easier and harder at the same time. Traditionally, we thought about DR in terms of recovery time objectives, offsite tape backup data storage, hot sites vs. warm sites – basically, how will we triage the business systems if we have an outage? With the cloud, the barriers to creating a “hot site” are much lower – you can run mirrored environments with an ease that would have been hard to imagine a few years ago. That said, you still have to put together a coherent DR plan that fits the new parameters.
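A coherent DR plan still comes down to checking each system against its recovery time objective, mirrored or not. The sketch below shows that check in miniature; the system names, fields, and numbers are invented for illustration, not drawn from any real inventory.

```python
# Sketch of a DR inventory check: mirroring (a cloud "hot site")
# lowers failover time, but each system still has to be verified
# against its recovery time objective (RTO).
from dataclasses import dataclass

@dataclass
class SystemDR:
    name: str
    rto_minutes: int        # recovery time objective
    mirrored: bool          # running as a mirrored cloud hot site?
    failover_minutes: int   # estimated time to fail over

def dr_gaps(systems):
    """Return the names of systems whose failover estimate misses the RTO."""
    return [s.name for s in systems if s.failover_minutes > s.rto_minutes]

# Hypothetical inventory: a mirrored system comfortably inside its RTO,
# and a legacy system relying on slower restore-from-backup.
inventory = [
    SystemDR("billing", rto_minutes=60, mirrored=True, failover_minutes=15),
    SystemDR("legacy-erp", rto_minutes=240, mirrored=False, failover_minutes=480),
]
```

The value of a list like this is triage: it tells you which systems the new, cheaper hot-site option should be applied to first.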
Or, consider security. Some perceive security as a show-stopper for the cloud, but it really shouldn’t be. Nobody wants their company to be shown on CNN suffering a security breach. And, of course, some data simply cannot go off premises, but most of it can. The cloud made us review our security policies and update them accordingly. For example, in the cloud, system components might be spread across a different security perimeter. The pace of application development and code integration can be so much faster in the cloud that the traditional security audit process probably isn’t going to work. Add to that the openness of new APIs that can connect your apps to pretty much anything in the universe, and you’ve got a new reality in security. These issues can be addressed, but they must be identified first.
Then, think about your applications. Not all of them are ready for the cloud, whether because of a technical issue or a licensing limitation. This is okay. Not everything is headed to the cloud, anyway. In our bi-modal IT strategy, we are placing certain legacy applications into what we call our “cap” category. We’re capping our investments in these systems because we do not think they have a future in the cloud with us. For those applications that we want to take to the cloud, we are working with our software partners to make sure that they’re embracing this journey to the cloud as we are.
IT departments can move to the cloud successfully; CenturyLink is just one example. It is possible to overcome perceived challenges to the cloud if we focus on people, process, and technology. For each, the cloud forces a reassessment of what we’ve been doing. The cloud pushes us to rethink our assumptions about how people, process, and technology in IT support the business. There are plenty of molehills along the way. The trick is to figure out if they’re the kind that help or hurt – and fill them in so they don’t grow into mountainous obstacles on the way to the cloud.