Tag Archives: cloud

Microsoft needs SDN for Azure cloud

Couldn’t scale without it, Azure CTO says
The Microsoft cloud, through which the company’s software products are delivered, has 22 hyper-scale regions around the world. Azure storage and compute usage is doubling every six months, and Azure signs up 90,000 new subscribers a month.


Fifty-seven percent of the Fortune 500 use Azure, and the number of hosts quickly grew from 100,000 to millions, said CTO Mark Russinovich during his Open Networking Summit keynote address this week. Azure needs a virtualized, partitioned and scale-out design, delivered through software, in order to keep up with that kind of growth.

“When we started to build these networks and started to see these types of requirements, the scale we were operating at, you can’t have humans provisioning things,” Russinovich said. “You’ve got to have systems that are very flexible and also delivering functionality very quickly. This meant we couldn’t go to the Web and do an Internet search for a scalable cloud controller that supports this kind of functionality. It just didn’t exist.”

Microsoft wrote all of the software code for Azure’s SDN itself.
Microsoft uses virtual networks (Vnets) built from overlays and Network Functions Virtualization services running as software on commodity servers. Vnets are partitioned through Azure controllers established as a set of interconnected services, and each service is partitioned to scale and run protocols on multiple instances for high availability.

Controllers are established in regions where there could be 100,000 to 500,000 hosts. Within those regions are smaller clustered controllers which act as stateless caches for up to 1,000 hosts.
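
To make the partitioning concrete, here is a minimal Python sketch of that two-tier design: a regional controller holds the authoritative Vnet state, while cluster controllers act as stateless read-through caches for the hosts beneath them. The class and method names are assumptions for illustration, not Azure’s actual interfaces.

```python
# Toy model of a two-tier SDN controller hierarchy (illustrative only;
# names and structure are assumptions, not Azure's actual design).

class RegionalController:
    """Authoritative store of Vnet state for 100,000-500,000 hosts."""
    def __init__(self):
        self._vnet_policy = {}                # vnet_id -> policy mapping

    def put_policy(self, vnet_id, policy):
        self._vnet_policy[vnet_id] = policy

    def get_policy(self, vnet_id):
        return self._vnet_policy[vnet_id]

class ClusterController:
    """Stateless cache serving up to ~1,000 hosts; safe to restart at any
    time because all authoritative state lives in the regional tier."""
    def __init__(self, region: RegionalController):
        self._region = region
        self._cache = {}

    def get_policy(self, vnet_id):
        if vnet_id not in self._cache:        # cache miss: fetch from region
            self._cache[vnet_id] = self._region.get_policy(vnet_id)
        return self._cache[vnet_id]

region = RegionalController()
region.put_policy("vnet-42", {"acl": "allow-web", "gw": "10.0.0.1"})
cluster = ClusterController(region)
print(cluster.get_policy("vnet-42"))          # first call fills the cache
```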
Microsoft builds these controllers using an internally developed Service Fabric for Azure. Service Fabric has what Microsoft calls a microservices-based architecture that allows customers to update individual application components without having to update the entire application.

Microsoft has made the Azure Service Fabric SDK publicly available.
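
The practical payoff of a microservices split is that one component can be versioned and upgraded without touching the rest. A toy sketch of that idea follows; it illustrates the concept only and is not the Service Fabric SDK:

```python
# Toy illustration of the microservices idea: each component carries its
# own version and can be upgraded alone. Names are invented for this sketch.

services = {
    "load-balancer": {"version": "1.4.0", "instances": 5},
    "vnet-manager":  {"version": "2.1.3", "instances": 7},
    "gateway":       {"version": "1.0.9", "instances": 3},
}

def rolling_upgrade(name, new_version):
    """Upgrade one service; every other service keeps running untouched."""
    svc = services[name]
    print(f"upgrading {name}: {svc['version']} -> {new_version} "
          f"across {svc['instances']} instances")
    svc["version"] = new_version

rolling_upgrade("vnet-manager", "2.2.0")      # other services unaffected
```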
Much of the programmability of the Azure SDN is performed on the host server with hardware assist. A Virtual Filtering Platform (VFP) in Hyper-V hosts enables Azure’s data plane to act as a programmable Hyper-V virtual switch for network agents that work on behalf of controllers for Vnet and other functions, such as load balancing.

Packet processing is done at the host where a NIC with a Field Programmable Gate Array offloads network processing from the host CPU to scale the Azure data plane from 1Gbps to 40Gbps and beyond. That helps retain host CPU cycles for processing customer VMs, Microsoft says.
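
Rough packet-rate arithmetic shows why the offload matters. The frame sizes below are standard Ethernet bounds, not Azure-published figures:

```python
# Back-of-the-envelope packet rates at 40 Gbps (assumed frame sizes,
# not Azure-published numbers).
LINE_RATE_BPS = 40e9

for frame_bytes in (1500, 64):
    pps = LINE_RATE_BPS / (frame_bytes * 8)
    print(f"{frame_bytes:>5}-byte frames: ~{pps / 1e6:.1f}M packets/sec")

# Even the large-frame case (~3.3M pps) leaves a software switch well under
# a thousand cycles per packet on a ~3 GHz core; at 64-byte frames (~78M pps)
# the budget collapses, which is why the FPGA NIC takes over the data path.
```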

Remote Direct Memory Access (RDMA) is employed for Azure’s high-performance storage back end.
Though SDNs and open source go hand-in-hand, there’s no open source software content in the Azure SDN. That’s because the functionality required for Azure was not offered through open source communities, Russinovich says.

“As these requirements were hitting us, there was no open source out there able to meet them,” he says. “And once you start on a path where you’re starting to build out infrastructure and system, even if there’s something else that comes along and addresses those requirements the switching cost is pretty huge. It’s not an aversion to it; it’s that we haven’t seen open source out there that really meets our needs, and there’s a switching cost that we have to take into account, which will slow us down.”

Microsoft is, however, considering contributing the Azure Service Fabric architecture to the open source community, Russinovich said. But there has to be some symbiosis.

“What’s secret sauce, what’s not; what’s the cost of contributing to open source, what’s the benefit to customers of open source, what’s the benefit to us penetrating markets,” he says. “It’s a constant evaluation.”

One of the challenges in constructing the Azure SDN was retrofitting existing controllers into Service Fabric, Russinovich says. That resulted in some scaling issues.
“Some of the original controllers were written not using Service Fabric so they were not microservice oriented,” he says. “We immediately started to run into scale challenges with that. Existing ones are being (rewritten) onto Service Fabric.

“Another one is this evolution of the VFP and how it does packet processing. That is not something that we sat down initially and said, ‘it’s connections, not flows.’ We need to make sure that packet processing on every packet after the connection is set up needs to be highly efficient. It’s been the challenge of being able to operate efficiently, scale it up quickly, being able to deliver features into it quickly, and being able to take the load off the server so we can run VMs on it.”
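
Russinovich’s “connections, not flows” distinction boils down to making only a connection’s first packet pay for full rule evaluation. Below is a minimal Python sketch of that idea; the rule table, field names and cache structure are invented for illustration and are not VFP’s actual data path:

```python
# Conceptual sketch of connection-oriented packet processing: evaluate
# rules once when a connection is set up, then hit a per-connection cache.

RULES = [
    (lambda p: p["dst_port"] == 80, "allow"),
    (lambda p: True,                "drop"),   # default rule
]

connection_cache = {}   # 5-tuple -> action decided at connection setup

def process(pkt):
    key = (pkt["src_ip"], pkt["dst_ip"], pkt["src_port"],
           pkt["dst_port"], pkt["proto"])
    action = connection_cache.get(key)
    if action is None:                         # slow path: first packet only
        action = next(a for match, a in RULES if match(pkt))
        connection_cache[key] = action
    return action                              # fast path for later packets

pkt = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
       "src_port": 51000, "dst_port": 80, "proto": "tcp"}
print(process(pkt))   # slow path: full rule evaluation -> "allow"
print(process(pkt))   # fast path: cached "allow"
```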

What’s next for the Azure SDN? Preparing for more explosive growth of the Microsoft cloud, Russinovich says.

“It’s a constant evolution in terms of functionality and features,” he says. “You’re going to see us get richer and more powerful abstractions at the network level from a customer API perspective. We’re going to see 10X scale in a few years.”


2014’s most significant cloud deals have OpenStack at heart

The most important cloud acquisitions this year have one thing in common: OpenStack.

2014’s slate of cloud deals reflects a few important trends in the market for open source cloud software. One is that traditional enterprise vendors continue to see potential in OpenStack, and they’re willing to shell out the cash to buy the expertise and technology they need to pursue the market.

The second is that despite interest from those big vendors, actual adoption of OpenStack hasn’t happened as quickly as some people might have hoped. The result is that some of the startups, even trendsetters like Cloudscaling, are open to acquisition as they realize they may not be able to make it on their own.


The impact of these deals is still unknown. On the downside, the acquiring vendors all have other flagship businesses they need to protect. In many cases, that means they’ll limit customers of their new OpenStack products and services to using their legacy products. The result is users won’t have as much choice as they might like.

The upside, however, is that the traditional vendors know how to ship stable, well-supported products. That’s a plus for any business that’s been reluctant to go with an OpenStack startup.

Here are the top five cloud deals of the year, so far:

EMC buys Cloudscaling for an unconfirmed $50 million

The rationale: With more workloads moving to the cloud, EMC knows its storage products have to be in the running for businesses building cloud operations. While EMC is an obvious option for VMware shops given that it owns VMware, it’s not always top of mind in the open source world. With its platform for building private OpenStack clouds, Cloudscaling gives EMC a foot in the door in the OpenStack community.

It remains to be seen if a culture clash will lead to hiccups, however. Cloudscaling, with its outspoken founder Randy Bias, has a reputation as a scrappy upstart. EMC, on the other hand, is more of a staid, traditional vendor.

Who cares? It’s possible that Cloudscaling won’t be quite so open once it gets absorbed by EMC. Cloudscaling currently names EMC competitors including Dell, HP and Supermicro as partners on its website, and Nexenta’s CEO is on Cloudscaling’s board. Also, Cloudscaling’s platform allows users to build hybrid clouds with Amazon Web Services and Google Cloud Platform. Given that those businesses compete with EMC or VMware in some way, it wouldn’t be a surprise if EMC restricts Cloudscaling’s openness in the future. That could be a disappointment for potential Cloudscaling users.

Impact: With the backing of a giant like EMC, Cloudscaling is likely to stabilize and become more attractive to enterprises. But being backed by a giant often means slower innovation. Combined with the potential for less choice for users, this deal slightly tips negative in terms of potential benefit to users.

HP buys Eucalyptus for an unconfirmed $100 million

The rationale: HP’s press release about the deal focused heavily on the fact that Marten Mickos, Eucalyptus’s CEO, will run HP’s cloud business. There was essentially no mention of Eucalyptus’s technology – a private cloud platform that’s compatible with AWS. It’s hard not to think that HP bought Eucalyptus primarily to get Mickos, who was also previously CEO of MySQL.

Who cares? If Mickos gets his way, users might get a unique and valuable capability. In an interview on the day the acquisition was announced, Mickos said his hope was to use Eucalyptus technology to bring AWS compatibility to HP’s OpenStack cloud products. That could be attractive for businesses that want to build private OpenStack clouds that burst to AWS when additional resources are needed.

Impact: The fact that Eucalyptus couldn’t go it alone seems to prove that a community-based open source project like OpenStack has a better chance of success than an open source platform driven by one company, like Eucalyptus. Chalk this up as a win for the OpenStack community.

Cisco buys Metacloud for an undisclosed sum

The rationale: With Metacloud, Cisco gets a unique technology that delivers an OpenStack private cloud as a service, remotely managing the cloud for customers. Cisco has actually had its own OpenStack distribution for years, but you’d be forgiven for not knowing it existed. The Metacloud deal lets Cisco sell customers server hardware combined with a well-known platform for running a cloud.

Who cares? VMware might. Cisco and VMware have had a curious relationship over the past few years, at one moment, partners, and the next, competitors.

For instance, Cisco, EMC and VMware started VCE, which offers packaged compute, storage and networking from those three companies. (Just this week Cisco reduced its stake in VCE to 10%.) Cisco also makes it easy for users of its server hardware to run VMware’s cloud products. With Metacloud, however, Cisco now opens the door for customers to go OpenStack instead of VMware.

Impact: Customers and potential customers lose another independent service provider, which offered users lots of choice, but gains a backer determined to be successful in OpenStack. This one is a wash.

Red Hat buys eNovance for $95 million

The rationale: Red Hat wants to dominate OpenStack and with eNovance it gains deployment expertise, since eNovance is in the business of helping customers build OpenStack clouds.

Who cares? While eNovance was open to using the best technology to meet a customer’s needs, including sometimes recommending AWS instead of an OpenStack cloud, that’s likely to change under Red Hat. For instance, eNovance will surely steer customers to Red Hat’s OpenStack distribution rather than any available from competitors.

Impact: The presumed loss of choice for eNovance customers pushes this deal into the negative column for customers.

Red Hat buys Inktank for $175 million

The rationale: Adding Inktank’s Ceph object and block storage software to its existing Gluster file system storage gives Red Hat a more complete portfolio of storage offerings. Also, as Ceph is popular among OpenStack users, the deal makes sense as part of Red Hat’s enthusiastic support of OpenStack.

Who cares? Red Hat tends to do its best to herd customers exclusively toward its own products, but it has pledged to keep Ceph open. For instance, Red Hat has said that Ceph will continue to run on non-Red Hat operating systems.

Impact: If Red Hat does indeed allow Ceph to continue to support non-Red Hat products, this deal should be a solid win for OpenStack users. Ceph has proved valuable to the OpenStack community and can benefit from Red Hat’s experience running open source projects and delivering open source products.

No deal!

This year has also seen a number of cloud-related acquisitions that didn’t end up happening. For instance, for months there was buzz around Rackspace looking for a buyer. Eventually, though, Rackspace said it had decided to continue to go it alone.

There were also rumors about EMC wanting to acquire HP. There’s more to both companies than the cloud, but after HP’s announcement of earmarking $1 billion for OpenStack, it’s clear the cloud is becoming an important business for the company.




Weighing the IT implications of implementing SDNs

Software-defined anything has myriad issues for data centers to consider before implementation

Software-defined networks raise a number of key questions that IT executives should think through before implementation.

Issues such as technology maturity, cost efficiencies, security implications, policy establishment and enforcement, interoperability and operational change weigh heavily on IT departments considering software-defined data centers. But perhaps the biggest consideration in software-defining your IT environment is, why would you do it?

“We have to present a pretty convincing story of, why do you want to do this in the first place?” said Ron Sackman, chief network architect at Boeing, at the recent Software Defined Data Center Symposium in Santa Clara. “If it ain’t broke, don’t fix it. Prove to me there’s a reason we should go do this, particularly if we already own all of the equipment and packets are flowing. We would need a compelling use case for it.”


And if that compelling use case is established, the next task is to get everyone onboard and comfortable with the notion of a software-defined IT environment.

“The willingness to accept abstraction is kind of a trade-off between control of people and hardware vs. control of software,” says Andy Brown, Group CTO at UBS, speaking on the same SDDC Symposium panel. “Most operations people will tell you they don’t trust software. So one of the things you have to do is win enough trust to get them to be able to adopt.”

Trust might start with assuring the IT department and its users that a software-defined network or data center is secure, at least as secure as the environment it replaces or is founded on. Boeing is looking at SDN from a security perspective, trying to determine whether it’s something it can objectively recommend to its internal users.

“If you look at it from a security perspective, the best security for a network environment is a good design of the network itself,” Sackman says. “Things like Layer 2 and Layer 3 VPNs backstop your network security, and they have not historically been a big cyberattack surface. So my concern is, are the capex and opex savings going to justify the risk that you’re taking by opening up a bigger cyberattack surface, something that hasn’t been a problem to this point?”

Another concern Sackman has is in the actual software development itself, especially if a significant amount of open source is used.

“What sort of assurance does someone have – particularly if this is open source software – that the software you’re integrating into your solution is going to be secure?” he asks. “How do you scan that? There’s a big development-time security vector that doesn’t really exist at this point.”

Policy might be the key to ensuring that the security and other operational practices in place pre-SDN/SDDC are not disrupted post-implementation. Policy-based orchestration, automation and operational execution are touted as chief benefits of SDN.

“I believe that policy will become the most important factor in the implementation of a software-defined data center because if you build it without policy, you’re pretty much giving up on the configuration strategy, the security strategy, the risk management strategy, that have served us so well in the siloed world of the last 20 years,” UBS’ Brown says.
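
To make Brown’s point concrete, policy in a software-defined data center typically becomes an explicit gate that every configuration change must pass before it is applied. A minimal sketch, with invented rule names and change format:

```python
# Minimal sketch of policy-gated configuration changes in an SDDC.
# Rule names and the change format are invented for illustration.

POLICIES = [
    ("no-public-mgmt",
     lambda c: not (c["port"] == 22 and c["src"] == "0.0.0.0/0")),
    ("prod-needs-approval",
     lambda c: c["env"] != "prod" or c["approved"]),
]

def apply_change(change):
    """Reject the change if any policy predicate fails; otherwise apply."""
    violations = [name for name, ok in POLICIES if not ok(change)]
    if violations:
        raise ValueError(f"change rejected by policy: {violations}")
    print(f"applied: open {change['port']} from {change['src']} "
          f"in {change['env']}")

apply_change({"port": 443, "src": "10.0.0.0/8",
              "env": "dev", "approved": False})
# apply_change({"port": 22, "src": "0.0.0.0/0",
#               "env": "prod", "approved": False})
# -> ValueError: rejected by both policies
```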

Software-defined data centers also promise to break down those silos through cross-function orchestration of the compute, storage, network and application elements in an IT shop. But that’s easier said than done, Brown notes – interoperability is not a guarantee in the software-defined world.

“Information protection and data obviously have to interoperate extremely carefully,” he says. “The success of software-defined workload management – aka virtualization and cloud – in a way has created a set of children, not all of which can necessarily be implemented in parallel, but all of which are required to get to the end state of the software-defined data center.

“Now when you think of all the other software abstraction we’re trying to introduce in parallel, someone’s going to cry uncle. So all of these things need to interoperate with each other.”

So are the purported capital and operational cost savings of implementing SDN/SDDCs worth the undertaking? Do those cost savings even exist?

Brown believes they exist in some areas and not in others.

“There’s a huge amount of cost take-out in software-defined storage that isn’t necessarily there in SDN right now,” he said. “And the reason it’s not there in SDN is because people aren’t ripping out the expensive underlying network and replacing it with SDN. Software-defined storage probably has more legs than SDN because of the cost pressure. We’ve got massive cost targets by the end of 2015 and if I were backing horses, my favorite horse would be software-defined storage rather than software-defined networks.”

Sackman believes the overall savings are there in SDN/SDDCs but again, the security uncertainty may make those benefits not currently worth the risk.

“The capex and opex savings are very compelling, and there are particular use cases specifically for SDN that I think would be great if we could solve specific pain points and problems that we’re seeing,” he says. “But I think, in general, security is a big concern, particularly if you think about competitors co-existing as tenants in the same data center — if someone develops code that’s going to poke a hole in the L2 VPN in that data center and export data from Coke to Pepsi.

“We just won a proposal for a security operations center for a foreign government, and I’m thinking can we offer a better price point on our next proposal if we offer an SDN switch solution vs. a vendor switch solution? A few things would have to happen before we feel comfortable doing that. I’d want to hear a compelling story around maturity before we would propose it.”



Tech salaries jump 5.3%, bonuses flat

Tech and engineering pros reported the largest salary jump in more than a decade, according to career site Dice

Average salaries for tech pros climbed 5.3% to $85,619 last year, up from $81,327 in 2011. It’s the largest salary jump in more than a decade, according to career site Dice, which specializes in jobs for tech and engineering professionals.

Entry-level talent (two years or less of experience) waited three years to see an increase in average annual pay — and the market made up for the stagnancy with an 8% year-over-year increase to $46,315. At the other end of the spectrum, average salaries for tech professionals with at least 15 years of experience topped six figures for the first time, growing 4% to $103,012.
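
The headline number is easy to verify from the dollar figures Dice reports; a one-line sanity check in Python:

```python
# Verify Dice's year-over-year figure from the reported averages.
prev, curr = 81327, 85619                # 2011 and 2012 average salaries
print(f"{(curr - prev) / prev:.1%}")     # -> 5.3%
```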



“Employers are recognizing and adjusting to the reality of a tight market,” said Scot Melland, CEO of Dice Holdings, in a statement. “The fact is you either pay to recruit or pay to retain and these days, at least for technology teams, companies are doing both.”

Tech bonuses were slightly more frequent — 33% of respondents got one in 2012 compared to 32% in 2011 — but slightly less lucrative at an average of $8,636 (down from $8,769).

“In the early stages of the recovery, companies were staying flexible by using performance pay to reward their top performers,” Melland said. “Now, companies are writing the checks that will stick. With a 3.8% tech unemployment rate, no one wants to lose talent.”

By location, Pittsburgh tech pros saw the largest salary increase, up 18% year over year to $76,207. Six other cities also reported double-digit growth in salaries — the most ever registered by the Dice Salary Survey:

San Diego (+13% to $97,328)
St. Louis (+13% to $81,245)
Phoenix (+12% to $83,607)
Cleveland (+11% to $75,773)
Orlando (+10% to $81,583)
Milwaukee (+10% to $81,670)

Silicon Valley remains the only market where tech professionals average six-figure salaries ($101,278).

Across the U.S., big data skills are in demand, as evidenced by $100,000+ salaries for tech pros who use Hadoop, NoSQL and MongoDB. By comparison, average salaries associated with cloud and virtualization are just under $90,000 and mobile salaries are closer to $80,000, Dice reports.

“We’ve heard it’s a fad, heard it’s hyped and heard it’s fleeting, yet it’s clear that data professionals are in demand and well paid,” said Alice Hill, managing director of Dice.com. “Tech professionals who analyze large data streams and strategically impact the overall business goals of a firm have an opportunity to write their own ticket. The message to employers? If you have a talented data team, hold on tight or learn the consequences of a lift-out.”

Looking ahead to the current year, 64% of tech professionals are confident they could find a favorable new job in 2013.

Dice surveyed 15,049 employed tech professionals between Sept. 24 and Nov. 16, 2012, for its annual Salary Survey.



What the cloud really means for your IT job


As companies adopt cloud services, is there more or less of a need for IT workers?

Depending on which survey or story you read, the cloud can be either a good thing for IT workers and their job security, or it can be terrifying.

For example, a study by Microsoft and IDC recently predicted that cloud computing will create 14 million jobs internationally by 2015. But those aren’t just IT jobs, they are jobs spread around the entire world, across all industries.

For IT shops, the news may not be as bright: A study by IT service provider CSC concluded that 14% of companies reduced their IT staff headcount after deploying a cloud strategy.

As businesses embrace the cloud, experts say there will still be a need for IT staff in the enterprise, but there will be a need for different types of IT workers. Instead of managing infrastructure, tending the help desk and commissioning server instances to be created, IT workers of tomorrow are instead more likely to be managing vendor relationships, working across departments and helping clients and workers integrate into the cloud.


“The No. 1 reason most enterprises are going to the cloud is cost savings,” says Phil Garland of PricewaterhouseCoopers’ CIO advisory business services unit. The largest line item in enterprise budgets is traditionally labor, so deploying the cloud will reduce the number of staff needed, he says.

But, this doesn’t necessarily mean that IT jobs are gone with the wind. In fact, while 14% of businesses surveyed by CSC cut IT staff, another 20% actually increased staff.

“It really depends on what the enterprise is doing in the cloud,” Garland adds. “In most cases, it’s a shift of responsibilities instead of wholesale cutting or hiring.”

Take the example of Underwriters Laboratories in Illinois, a 9,000-person company that provides third-party inspection and certification services to more than 50,000 businesses around the world with its trademark UL symbol.

In August, the company transitioned from an in-house managed deployment of IBM communications systems Lotus Notes and Domino, to a cloud-based SaaS offering of Microsoft Office 365. “We needed something that would be much more elastic,” says CIO Christian Anschuetz. The company has executed a handful of mergers and acquisitions in recent years, and it expects more in the future. Anschuetz wanted a simpler way of deploying increased instances of communications systems without the need to add infrastructure to support it.

The migration to the cloud took about eight weeks and it created an almost immediate shift in the firm’s IT needs. UL no longer needed workers to manage its communications platform, email servers and chat functions. Despite cutting in those areas, Anschuetz says his investment in cloud personnel has tripled since the cloud adoption.

“Most people think that with such a deployment we would be drawing down our services to make them more cost-effective,” he says. “Our internal IT is growing.”

The greatest need for services in UL’s new system is for customer-facing employees who can help UL clients integrate into the company’s platform. As a firm that oversees product development and manufacturing, Anschuetz says customers want UL workers to be involved in the product lifecycle as early as possible. A cloud-based system, he says, allows UL to work more closely with customers on product development. Instead of a face-to-face meeting, or emailing documents back and forth, documents are now hosted in a cloud environment that both UL and the customer have access to, allowing for greater collaboration, he says.

“UL has realized the elasticity that the cloud provides us is of great value in the marketplace,” he says. “It allows us to develop new applications and regenerate relationships with customers.” Because of the value it creates for the business, UL is adding workers to help manage the cloud integration efforts.

This is the reasoning IDC and Microsoft used in their study claiming the cloud will help create 14 million jobs in the next five years.

“By offloading services to the cloud, you increase the amount of budget you have for new projects and initiatives, which are the things that truly lead to new business revenues,” says John Gantz, an IDC researcher who studies technology economics.

Three-quarters of IT spending today, he says, is on legacy systems and upgrades, with the remainder on new products. If an enterprise cuts system management costs, that creates additional resources for new projects and initiatives, which drive revenues and can potentially create jobs. Although, Gantz stresses, those may not be in the IT department.

In the short term, cloud deployments can create an increased need for IT staff to manage the transition and monitor the new cloud system and vendors. In the long term, however, the cloud generally creates efficiencies and reduces IT staffing in an enterprise, he says. At the macroeconomic level, though, Gantz doesn’t expect much net impact: some of the jobs lost at individual companies could be offset by increased staffing needs at cloud vendors, he says.

David Moschella, global research director for the Leading Edge Forum at CSC, agrees that IT investments usually lead to a drop in staffing needs for a company.

“Businesses can be run with less people because of technology advancement,” he says.

Traditionally there has been an argument that when jobs are eliminated in one area, they can be increased in another. Moschella believes that will be the case, but he says it’s too early to tell exactly which areas will be the beneficiaries of the job boom the cloud can provide.

What is expected is that traditional IT roles of managing software and hardware will no longer be needed in the new cloud-heavy world.
