As containers take off, so do security concerns

Containers offer a quick and easy way to package up applications but security is becoming a real concern

Containers offer a quick and easy way to package up applications and all their dependencies, and are popular for testing and development.

According to a recent survey sponsored by container data management company ClusterHQ, 73 percent of enterprises are currently using containers for development and testing, but only 39 percent are using them in a production environment.

But this is changing: 65 percent said that they plan to use containers in production in the next 12 months, and they cited security as their biggest worry. According to the survey, just over 60 percent said that security was either a major or a moderate barrier to adoption.

Containers can be run within virtual machines or on traditional servers. The idea is somewhat similar to that of a virtual machine itself, except that while a virtual machine includes a full copy of the operating system, a container does not, making them faster and easier to load up.

The downside is that containers are less isolated from one another than virtual machines are. In addition, because containers are an easy way to package and distribute applications, many are doing just that — but not all the containers available on the web can be trusted, and not all libraries and components included in those containers are patched and up-to-date.

According to a recent Red Hat survey, 67 percent of organizations plan to begin using containers in production environments over the next two years, but 60 percent said that they were concerned about security issues.
Isolated, but not isolated enough

Although containers are not isolated from one another as thoroughly as virtual machines are, they are more secure than running applications on their own.

“Your application is really more secure when it’s running inside a Docker container,” said Nathan McCauley, director of security at Docker, which currently dominates the container market.

According to the ClusterHQ survey, 92 percent of organizations are using or considering Docker containers, followed by LXC at 32 percent and Rocket at 21 percent.

Since the technology was first launched, McCauley said, Docker containers have had built-in security features such as the ability to limit what an application can do inside a container. For example, companies can set up read-only containers.
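As an illustration of what such restrictions look like in practice (the flag names below match recent Docker CLI releases and may differ in older versions; `nginx` is just a stand-in image):

```shell
# Run a container whose root filesystem is read-only, so the
# application inside cannot modify the image's contents at runtime;
# a tmpfs is mounted at /tmp for paths that legitimately need writes.
docker run --read-only --tmpfs /tmp nginx

# Linux capabilities can also be dropped to limit what the
# containerized process is allowed to do.
docker run --cap-drop ALL --cap-add NET_BIND_SERVICE nginx
```

Both commands require a working Docker installation on the host.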

Containers also use namespaces by default, he said, which prevent applications from being able to see other containers on the same machine.

“You can’t attack something else because you don’t even know it exists,” he said. “You can’t even get a handle on another process on the machine, because you don’t even know it’s there.”
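The namespace isolation McCauley describes is easy to see firsthand on a Docker host (`alpine` is just a convenient small image):

```shell
# A process listing inside a container shows only the container's
# own processes; everything else on the host is invisible because
# the container has its own PID namespace.
docker run --rm alpine ps aux
# Typically lists just the `ps` process itself, running as PID 1.
```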


However, container isolation doesn’t go far enough, said Simon Crosby, co-founder and CTO at security vendor Bromium.

“Containers do not make a promise of providing resilient, multi-tenant isolation,” he said. “It is possible for malicious code to escape from a container to attack the operating system or the other containers on the machine.”

If a company isn’t looking to get maximum efficiency out of its containers, however, it can run just one container per virtual machine.

This is the case with Nashua, NH-based Pneuron, which uses containers to distribute its business application building blocks to customers.

“We wanted to have assigned resourcing in a virtual machine to be usable by a specific container, rather than having two containers fight for a shared set of resources,” said Tom Fountain, the company’s CTO. “We think it’s simpler at the administrative level.”

Plus, this gives the application a second layer of security, he said.

“The ability to configure a particular virtual machine will provide a layer of insulation and security,” he said. “Then when we’re deployed inside that virtual machine then there’s one layer of security that’s put around the container, and then within our own container we have additional layers of security as well.”

But the typical use case is multiple containers on a single virtual machine, according to a survey of IT professionals released Wednesday by container security vendor Twistlock.

Only 15 percent of organizations run one container per virtual machine. The majority of the respondents, 62 percent, said that their companies run multiple containers on a single virtual machine, and 28 percent run containers on bare metal.

And the isolation issue is still not figured out, said Josh Bressers, security product manager at Red Hat.

“Every container is sharing the same kernel,” he said. “So if someone can leverage a security flaw to get inside the kernel, they can get into all the other containers running that kernel. But I’m confident we will solve it at some point.”

Bressers recommended that when companies think about container security, they apply the same principles as they would apply to a naked, non-containerized application — not the principles they would apply to a virtual machine.

“Some people think that containers are more secure than they are,” he said.
Vulnerable images

McCauley said that Docker is also working to address another security issue related to containers — that of untrusted content.

According to BanyanOps, a container technology company currently in private beta, more than 30 percent of containers distributed in the official repositories have high priority security vulnerabilities such as Shellshock and Heartbleed.

Outside the official repositories, that number jumps to about 40 percent.

Of the images created this year and distributed in the official repositories, 74 percent had high or medium priority vulnerabilities.

“In other words, three out of every four images created this year have vulnerabilities that are relatively easy to exploit with a potentially high impact,” wrote founder Yoshio Turner in the report.

In August, Docker announced the release of Docker Content Trust, a new feature in the container engine that makes it possible to verify the publisher of container images.

“It provides cryptographic guarantees and really leapfrogs all other secure software distribution mechanisms,” Docker’s McCauley said. “It provides a solid basis for the content you pull down, so that you know that it came from the folks you expect it to come from.”
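Enabling the feature is a matter of setting an environment variable before pulling; the `example/unsigned` image name below is hypothetical, shown only to illustrate the behavior:

```shell
# With content trust enabled, `docker pull` verifies signatures
# before accepting an image and rejects unsigned content.
export DOCKER_CONTENT_TRUST=1
docker pull alpine:latest         # succeeds if the tag is signed
docker pull example/unsigned:1.0  # fails: no trust data for the image
```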

Red Hat, for example, which has its own container repository, signs its containers, said Red Hat’s Bressers.

“We say, this container came from Red Hat, we know what’s in it, and it’s been updated appropriately,” he said. “People think they can just download random containers off the Internet and run them. That’s not smart. If you’re running untrusted containers, you can get yourself in trouble. And even if it’s a trusted container, make sure you have security updates installed.”

According to Docker’s McCauley, existing security tools should be able to work on containers the same way as they do on regular applications, and he also recommended that companies follow Linux security best practices.

Earlier this year Docker, in partnership with the Center for Internet Security, published a detailed security benchmark best practices document, along with a tool called Docker Bench that checks host machines against these recommendations and generates a status report.
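Running the benchmark is straightforward; one documented way is to clone the project's repository and run the script directly on the Docker host:

```shell
# Docker Bench checks the host and its containers against the
# CIS Docker benchmark recommendations and prints a status report.
git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security
sudo sh docker-bench-security.sh
```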

However, for production deployment, organizations need tools that they can use that are similar to the management and security tools that already exist for virtualization, said Eric Chiu, president and co-founder at virtualization security vendor HyTrust.

“Role-based access controls, audit-quality logging and monitoring, encryption of data, hardening of the containers — all these are going to be required,” he said.

In addition, container technology makes it difficult to see what’s going on, experts say, and legacy systems can’t cut it.

“Lack of visibility into containers can mean that it is harder to observe and manage what is happening inside of them,” said Loris Degioanni, CEO at Sysdig, one of the new vendors offering container management tools.

Another new vendor in this space is Twistlock, which came out of stealth mode in May.

“Once your developers start to run containers, IT and IT security suddenly becomes blind to a lot of things that happen,” said Chenxi Wang, the company’s chief strategy officer.

Say, for example, you want to run anti-virus software. According to Wang, it won’t run inside the container itself, and if it’s running outside the container, on the virtual machine, it can’t see into the container.

Twistlock provides tools that can add security at multiple points: it can scan a company’s repository of containers, and it can scan containers as they are loaded, preventing vulnerable containers from launching.

“For example, if the application inside the container is allowed to run as root, we can say that it’s a violation of policy and stop it from running,” she said.
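This is not Twistlock's actual tooling, but the class of check Wang describes can be sketched with plain Docker commands; the `check_runs_as_root` function name is illustrative:

```shell
# Flag a container whose configured user is root (or unset, which
# defaults to root) -- the policy violation described above.
check_runs_as_root() {
  user=$(docker inspect --format '{{.Config.User}}' "$1")
  if [ -z "$user" ] || [ "$user" = "root" ]; then
    echo "policy violation: $1 runs as root"
    return 1
  fi
  echo "$1 ok: runs as $user"
}
```

On a real host you would call it with a running container's name, e.g. `check_runs_as_root mycontainer`.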

Twistlock can monitor whether a container is communicating with known command-and-control hosts and either report it, cut off the communication channel, or shut down the container altogether.

And the company also monitors communications between the container and the underlying Docker infrastructure, to detect applications that are trying to issue privileged commands or otherwise tunnel out of the container.

Market outlook

According to IDC analyst Gary Chen, container technology is still so new that most companies are still figuring out what value containers offer and how they’re going to use them.

“Today, it’s not really a big market,” he said. “It’s still really early in the game. Security is something you need once you start to put containers into operations.”

That will change once containers get more widely deployed.

“I wouldn’t be surprised if the big guys eventually got into this marketplace,” he said.

More than 800 million containers have been downloaded so far by tens of thousands of enterprises, according to Docker.

But it’s hard to calculate the dollar value of this market, said Joerg Fritsch, research director for security and risk management at research firm Gartner.

“Docker has not yet found a way to monetize their software,” he said, and there are very few other vendors offering services in this space. He estimates the market size to be around $200 million or $300 million, much of it from just a single services vendor, Odin, formerly the service provider part of virtualization company Parallels.

With the exception of Odin, most of the vendors in this space, including Docker itself, are relatively new startups, he said, and there are few commercial management and security tools available for enterprise customers.

“When you buy from startups you always have this business risk, that a startup will change its identity on the way,” Fritsch said.


MCTS Training, MCITP Training

Best Microsoft MCP Certification, Microsoft MCSE Training at

How to get security right when embracing rapid software development

Five steps to reduce risk while moving to continuous updates

Accelerated software development brings with it particular advantages and disadvantages. On one hand, it increases the speed to market and allows for fast, frequent code releases, which trump slow, carefully planned ones that unleash a torrent of features at once. Continuous release cycles also allow teams to fine-tune software. With continuous updates, customers don’t have to wait for big releases that could take weeks or months.

Embracing failure without blame is also a key tenet of rapid acceleration. Teams grow faster this way, and management should embrace this culture change. Those who contribute to accidents can give detailed accounts of what happened without fear of repercussion, providing valuable learning opportunities for all involved.

However, when things are moving as quickly as rapid acceleration allows, outages, security vulnerabilities and bugs become bigger concerns. Mistakes can occur, potentially leading to security problems. The upside: Automation of tasks can actually reduce mistakes and thus remove potential security issues.

When development is rushed without security awareness, the wrong software or unencrypted, insecure apps could be installed; audits and compliance checks could fail; intellectual property or private customer data may be leaked. Security is essential to the success of any development project — make it a priority.

How to Accelerate Safely
Minimize security concerns associated with rapid acceleration by talking to all stakeholders involved. Everyone needs to be brought into the discussion. Members of the development team, along with operations and security, should analyze the existing system and vocalize their visions for the new one prior to closing gaps with tools, automation and testing.

To implement a rapid approach to software development while reducing the potential risks, consider these five steps:

* Automate everything. Your team must take time to identify bottlenecks (the delivery process, infrastructure, testing, etc.) and find methods to automate anything that doesn’t need to be completed manually.

Consider establishing a system for continuous deployment. This allows automatic deployment of every software update to production and delivery. Continuous integration should also be a priority so changes and code added to the pipeline are automatically isolated, tested, and reported on before automation tools integrate code into the code base. Automation not only reduces waste in the process, but it also produces a repeatable process and outcome, which are squarely in the wheelhouse of security’s desires.

* Be agile but not unrealistic. Instead of spending an exorbitant amount of time on planning, flesh out the requirements and begin the process. Start by designating people to stay ahead of development, keep the project on track, and ensure deliverables are completed on schedule. Through it all, keep operations — and your company — transparent.

If someone runs in with a high-priority request, the project manager or product owner can say, “No, we can’t finish that in this sprint, but we can add it to the backlog with a high-priority mark and work it into an upcoming sprint.” Agile programming is a pull model, not a push model. Management needs to understand how this works and support it.

If the sprint’s allocated stories are completed early, more work can then be pulled in. That said, don’t let others push unplanned work on the team. Agile programming requires team agreement to complete a specific amount of work in a specific time frame.

* Work across departments. When departments move together rapidly, tensions will inevitably rise. Security should be brought into the fold so these issues don’t cause speed bumps. Sales teams, marketing teams, or teams invested in the end product need to have an equal seat at the table. Planning should be a collaborative effort among all stakeholders.

* Separate duties and systems. Often, as companies attempt to embrace rapid acceleration, a need for separation of duties may arise as just one of many compliance requirements. Only select employees should have access to production and test systems.

* Work as a team. Ensure everyone understands the company’s compliance and controls requirements. Be creative to ensure requirements are met without creating speed bumps. Also, consider how controls could be automated. Finally, check with your auditor to make sure what you’ve implemented meets the requirements.
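The gating logic behind "automate everything" can be sketched as a small shell function; `run_tests`, `scan_artifacts`, and `deploy` are placeholders for whatever commands your pipeline actually runs:

```shell
# A change is deployed only after isolated tests and a security
# scan both pass; any failure blocks the release and says why.
gate_and_deploy() {
  run_tests      || { echo "blocked: tests failed"; return 1; }
  scan_artifacts || { echo "blocked: scan failed"; return 1; }
  deploy && echo "deployed"
}
```

Because every change flows through the same gate, the process is repeatable — the property the security team cares about most.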

Security will always be a concern with development, and that concern only intensifies when processes speed up. As long as your teams work together, communicate clearly, know their places and expectations, and hold one another accountable, you can hasten the development process while keeping security fears at bay.



If Citrix is for sale, who will buy it?

The list of potential buyers may be short

If Citrix Systems is for sale, there is a short list of companies that would have reason to buy it.

Citrix’s products are widely used in corporate environments. Its revenue reached $3.4 billion last year, an 8% increase over the prior year.

Founded in 1989, the company has successfully fought off numerous competitive challenges along the way, and it maintains a strong user base. But investors are now pressuring Citrix, according to Reuters, to consider selling or unloading assets. These investors may lack the long view.

Citrix has thrived in a relatively narrow technology space despite stiff competition, most notably from Microsoft’s money and competing technology. Citrix has arguably stayed ahead in remote desktop technology, while nonetheless maintaining a good working relationship and partnership with Microsoft.

Dell is rumored to be interested in Citrix, but analysts say there are other firms that may have a strong incentive to buy it.

The most significant threat today to Citrix is the development of alternative methods to access information. The rise of mobile computing and cloud is changing this market.

As applications turn toward SaaS, they are also becoming more cross-platform, said David Johnson, an analyst at Forrester, “which means that they don’t have Windows dependencies anymore and will work through a browser and a wider range of operating systems natively.”

These alternative platforms “erode some of Citrix’s value proposition,” said Johnson. And while Citrix deals with these platform shifts, it still faces competition from Microsoft and VMware in desktops as well as server virtualization.

But analysts see little cause for concern from a Citrix customer’s standpoint.

“They’ve still got a solid business,” said Charles King, an analyst at Pund-IT, of Citrix.

The two companies that are probably the best fit for Citrix are Dell and Hewlett-Packard. Both firms have very deep relationships with Microsoft and “are major players in markets where Citrix is a major entity,” said King.

He believes that Dell is more likely to show interest than HP, but only because HP is completing its split, separating its PC division from its enterprise products and services. It has a lot going on right now, he said.

Both Dell and HP, for instance, sell thin clients, devices designed to operate in virtual desktop environments. But two other firms that may be interested in Citrix, said Johnson, are Microsoft and VMware.



Microsoft has built software, but not a Linux distribution, for its software switches

A Microsoft Linux distribution would be remarkable, but Redmond says it doesn’t have one.

Late last week, hell had apparently frozen over with the news that Microsoft had developed a Linux distribution of its own. The work was done as part of the company’s Azure cloud platform, which uses Linux-based network switches as part of its software-defined networking infrastructure.

While the software is real, Microsoft isn’t characterizing it as a Linux distribution, telling us that it’s an internal project. That’s an important distinction, and we suspect that we’re not going to see a Microsoft Linux any time soon.

The Open Compute Project (OCP), of which Microsoft is a member, is an industry group that is working together to define hardware and software standards for data center equipment. This includes designs for high-density compute nodes, storage, and networking equipment. One part that Microsoft has been working on is network hardware, in particular, software-defined networking (SDN). SDN adds a layer of software-based programmability, configuration, and centralized management to hardware that is traditionally awkward to manage. Traditional network switches, even managed ones, aren’t designed to enable new policies—alterations to quality-of-service or VLANs, say—to be deployed to hundreds or thousands of devices simultaneously. And to the extent that such capabilities are present, they vary from vendor to vendor.

Earlier this year, Microsoft, Dell, Mellanox, Facebook, Broadcom, and Intel contributed a specification, the Switch Abstraction Interface (SAI), that provides a common API that can span the wide range of ASICs (application-specific integrated circuits—chips tailored to handle very specific workloads, in this case, handling Ethernet frames) used in software-defined switch hardware. The SAI API is, in principle, cross-platform, defined for both Windows and Linux, but much of the switch hardware is supported best, or even exclusively, in Linux. A Linux distribution to support these applications, Open Network Linux, has even been developed.

The Azure Cloud Switch, which is what Microsoft announced on Friday, is Redmond’s software-defined switch. It builds on the SAI API to enable it to work with switch hardware from many different vendors; in August, an ACS switch using six different vendors’ switch ASICs was demonstrated. ACS is built on top of a Linux SAI implementation, and it uses Linux drivers for the switch ASICs.

Given Linux’s dominance in this area, it’s at once both surprising and unsurprising that ACS uses Linux. Unsurprising because there’s little practical alternative for this situation; surprising because Microsoft is still assumed to have some degree of hostility toward Linux. The company today would tell you that this hostility is a thing of the past: it’s willing to use the best tool for the job, and it works to ensure that its software is available on the platforms that people need it on. With this new, more pragmatic Microsoft, the use of Linux should be expected. And accordingly, Microsoft says that it is using this software in its own datacenters. Microsoft has publicly used non-Windows infrastructure before—some Skype infrastructure initially used Linux, for example, and Hotmail ran on FreeBSD when it was bought—but this is nonetheless unusual, as it’s new Microsoft development, not a bought-in project.

So why isn’t the company calling this new endeavor a distribution? The big reason is that the company doesn’t intend to distribute it. Again, it’s an internal development that showcases the OCP approach, but it isn’t a package that will be given to third parties.

Microsoft’s diagram describing ACS might also be significant; the Microsoft components are a set of applications and services that sit above SAI; that’s a chunk of software, but everything else could be taken from an off-the-shelf Linux distribution (Microsoft hasn’t specified). Another confounding factor could be the various switch ASIC components. Each vendor’s ASICs have their own drivers and SDKs, and at least some of these are not open source. This would make it difficult to build a Linux distribution around them.

As such, hell likely remains toasty and warm, and Microsoft won’t be in the Linux distribution business any time soon. But equally, it’s clearer than ever that this isn’t the Microsoft of the 2000s. If Linux is the best tool for the job, Microsoft is willing not only to use it, but to tell the world that it’s doing so.




5 companies that impress with employee benefits

A healthy employee is a happy employee, and these five companies figured that out. These powerhouses offer employees impressive health and wellness benefits to keep stress down and productivity up.

How some companies strive to keep employees happy and healthy
Your office chair is killing you. Well, OK, sitting for eight hours a day at your desk job might not be killing you, but at the very least, it’s not good for your health. On top of that, we’re learning that our brains haven’t caught up with the stress of our culture’s modern “always-on” lifestyles; they still react as if to the caveman threats of our past. Is it an email from your boss stressing you out, or are you being chased by a lion? Your brain really can’t tell the difference, meaning many of us live in a constant state of fight or flight. And if you have a bad boss, you could even be at higher risk for heart disease, not to mention depression, sleep problems, anxiety and a number of other health issues.

That’s probably why companies are taking corporate wellness and benefits seriously, as more health concerns pop up over sedentary work and stressful environments. Here are five companies with corporate wellness programs and benefits aimed at keeping employees happy, healthy and most of all, productive.

Well-known as a progressive Internet company, Google has an impressive corporate wellness program. To start, the café supplies free breakfast, lunch and dinner for employees, with options ranging from sushi to fresh pressed juice. The Mountain View, Calif., office also has its own on-site physicians and nurses, so if you feel a cold coming on, you can get treated on site. Google also encourages its employees to continue learning by offering a reimbursement program for classes and degree programs. And employees seeking legal counsel can also get advice at no cost and even get legal services at a discount.

There are also shuttle buses, complete with Wi-Fi, to take employees to and from work, as well as an electric-car share program, plug-in stations for electric vehicles and gBikes to get around campus. There’s more, too: Google has on-site exercise facilities, extra time off for new parents, a rumored bowling alley, as well as roof decks and unique office layouts.

Zappos’ decision to do away with bosses and adopt holacracy is a testament to the company’s dedication to staying unique in the corporate world. And that extends to the vast amount of benefits the company offers its employees. Starting with medical, employees get a free employee premium, free primary care and free generic prescriptions. Employees can take advantage of 24-hour telemedicine service, wellness coaches, infertility benefits, on-site health screenings and more.

Zappos’ Las Vegas office features an on-site fitness center with both in-person and virtual exercise classes. Employees can get nutritional advice, take weight management classes, get smoking cessation help, learn to reduce stress, take part in “wellness competitions,” get massages and much more right on campus. There is even a nap room with a “nap pod,” for employees that need to catch a few Z’s before getting back to work. Employees already dedicated to their fitness goals can even receive rewards and recognition from the company for their efforts.

In addition to full benefits like flexible work and time off, comprehensive benefits and travel benefits, just to name a few, employees at Cisco can get acupuncture, physical therapy and primary care right on-site. The company has its own on-site fitness center as well, where employees can get a workout in during the day. Cisco’s campus also has an outdoor sports club, organized sports leagues and hiking and biking trails for employees to use.

Its café focuses on providing fresh, seasonal and healthy food for workers, and there are also gourmet food trucks where employees can get their lunch. Teams also receive “fun-funds,” so they can celebrate and take part in team-building exercises outside of the office. For employees who want to give back, Cisco will donate $10 for every hour of volunteer work, up to $1,000, and will also match any cash donation, up to $1,000, to a nonprofit organization.

While Yahoo CEO Marissa Mayer might have cut back on working from home, a highly sought-after perk, the company has a number of wellness benefits for employees. Employees can take fitness classes on-site, including yoga, cardio-kickboxing, Pilates and even golf lessons. The cafeteria is open 24 hours a day, 7 days a week for those long work days, and employees receive monthly food coupons to help subsidize the cost.

Both men and women get up to eight weeks of leave for the birth of a baby, adoption or foster child placement and new moms can take up to 16 weeks. Employees also get $500 a month for incidentals like groceries, laundry and even going out to eat. And anytime an employee gets to a five-year milestone, they can take up to eight weeks of unpaid leave.

One look at Apple’s page on Glassdoor, and it’s clear people like working for the company. With a whopping 5,500 reviews, the company maintains a 4.5-star rating, out of a possible 5 stars. Benefits kick in immediately for employees, and even part-time workers in the Apple Store get full benefits.

Some companies might keep employees stocked with soda and bagels, but Apple instead supplies its workers with, well, Apples. And every few weeks the company throws a “beer bash,” where employees can get together on the campus to mingle, listen to live music and drink free beer. Apple also helps with the strain of commuting to Cupertino by offering shuttles and stipends for those traveling by bus or train.




Why (and how) VMware created a new type of virtualization just for containers

VMware says containers and virtual machines are better together

As the hype about containers has mounted over the past year, it has raised questions about what this technology – which is for packaging applications – means for traditional management and virtualization vendors. Some have wondered: Will containers kill the virtual machine?

VMware answered that question with a resounding no at its annual conference in San Francisco last week. But, company officials say containers can benefit from having a new type of management platform. And it’s built a whole new type of virtualization just for containers.
Virtualization for containers

A decade and a half ago, VMware helped revolutionize the technology industry with the introduction of enterprise-grade hypervisors that ushered in an era of server virtualization.

Last week the company revealed a redesigned version of its classic virtualization software named Project Photon. It’s a lightweight derivative of the company’s popular ESX hypervisor that has been engineered specifically to run application containers.

“At its core, it’s still got the virtualization base,” explains Kit Colbert, VMware’s vice president and CTO of Cloud Native Applications. Colbert calls Photon a “micro-visor” with “just enough” functionality to have the positive attributes of virtualization, while also being packaged in a lightweight format ideal for containers.

Project Photon includes two key pieces. One is named Photon Machine – a hypervisor born out of ESX that is installed directly onto physical servers. It creates miniature virtual machines that containers are placed in, and each includes a guest operating system, which the user can choose. By default Photon Machine comes with VMware’s customized Linux distribution, named Photon OS, which the company has also designed to be container friendly.

The second major piece is named Photon Controller, which is a multi-tenant control plane that can handle many dozens, if not hundreds or thousands of instances of Photon Machine. Photon Controller will provision the clusters of Photon Machines and ensure they have access to network and storage resources as needed.

The combination of Photon Machine and Photon Controller creates a blueprint for a scale-out environment that has no single point of failure and exposes a single logical API endpoint that developers can write to. In theory, IT operators can deploy Project Photon and developers can write applications that run on it.

Project Photon will integrate with various open source projects, such as Docker for container run-time support, as well as Google’s Kubernetes and Pivotal’s Cloud Foundry for higher-level application management. (Photon manages infrastructure provisioning, while Kubernetes and CF manage application deployments.)


VMware has not yet set pricing for either product, but both will be available this year in a private beta.
The journey to containers

Not all customers are ready to go all-in on containers, though, so VMware is also integrating container support into its traditional management tools.

vSphere Integrated Containers, the second product VMware announced, is what Colbert calls a good starting point for organizations that want to get their feet wet with containers. For full-scale container build-outs, Colbert recommends transitioning to Project Photon.

vSphere Integrated Containers is a plugin for vSphere, the company’s venerable ESX management software. “It makes containers first-class citizens in vSphere,” Colbert explains. With the plugin, customers can deploy a container inside its own virtual machine, allowing the container to be managed just like any other VM by vSphere.

By comparison, if a user wanted to deploy containers in vSphere today, they would likely deploy multiple containers inside a single virtual machine. Colbert says that has potentially harmful security implications: if one container in the VM is compromised, the other containers in that VM could be affected. Packaging one container inside each VM lets containers be protected by the security isolation and baked-in management features of vSphere.

Kurt Marko, an analyst at Marko Insights, says VMware’s approach to containers could be appealing to VMware admins who are being pressured to embrace containers. It could come with a downside, though.

“Wrapping Photon containers in a micro-VM makes it look like any other instance to the management stack and operators,” Marko wrote in an email. “Of course, the potential downside is lost efficiency since even micro-VMs will have more overhead than containers sharing the same kernel and libraries.” VMware says the VM overhead is minute, but Marko says it will take independent analysis to determine whether there is a tax for running containers inside VMs.
Hold your horses

As VMware attempts to position itself as a container company, there are headwinds. First, it is still very early on in the container market.

“The hype far outweighs the utilization” at this point, says IDC analyst Al Gillen, program vice president for servers and systems software. He estimates that fewer than one-tenth of 1 percent of enterprise applications currently run in containers, and that it could be more than a decade before the technology reaches mainstream adoption of more than 40 percent of the market.

VMware also hasn’t traditionally been known as a company that leads the charge on cutting-edge open source projects, a perception the company is fighting. Sheng Liang, co-founder and CEO of Rancher Labs, a startup that was showcasing its container operating system and management platform at VMworld, said the container movement has thus far been driven largely by developers and open source platforms like Mesos, Docker and Kubernetes; he said he hasn’t run into a single user who runs containers in VMware environments.

Forrester analyst Dave Bartoletti says that shouldn’t be surprising. VMware has strong relationships with IT operations managers, not the developers who have most enthusiastically adopted containers. The announcements the company made at VMworld are about enabling those IT ops workers to embrace containers in their VMware environments. Other management vendors, such as Red Hat, Microsoft and IBM, are embracing containers just as enthusiastically. VMware’s argument, though, is that containers and VMs are better together.

Are mainframes the answer to IT’s energy efficiency concerns?

Anyone who manages the technology demands of a large business faces plenty of exciting moments on the job, but it’s safe to say that calculating the energy costs of your company’s IT systems isn’t among them.

I was reminded of just how hard it is to factor energy efficiency into purchase and configuration decisions while reading some recent claims in the media about the cloud, and I remembered some simple but often overlooked ways mainframes solve tough energy-efficiency dilemmas.

The Power of One
A device that can handle more data with fewer resources sounds like the definition of efficiency to me. No matter how much power it may have, a cluster of servers is still composed of multiple devices, and every device involved in a clustered system multiplies issues of space, heat production, and power requirements. With up to 141 configurable processor units and 10TB of memory in a single machine, current mainframes offer power comparable to that of a large cluster of x86-based servers while saving floor space and energy. That’s important for organizations looking to reduce their carbon or physical footprint, or to meet energy-efficiency thresholds or capacity limits.

Limits of Capacity
One of the most energy-efficient aspects of mainframes is rooted in the system’s design. From their inception, mainframes have had some of the highest resource utilization rates of any hardware, often exceeding 95%. Many other systems are designed to run at 70% capacity or less to allow for system-originated maintenance, cleanup, and checkpoints. If a hefty percentage of a system’s capacity is always busy processing self-generated tasks, then those throughput figures don’t really contribute to efficiency, do they?
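That utilization gap translates directly into hardware counts. Here is an illustrative back-of-the-envelope sketch; the 95% and 70% ceilings are the figures cited above, while the workload and per-machine capacity numbers are hypothetical placeholders:

```python
# Back-of-the-envelope: how many machines are needed to serve a fixed
# workload when each box can only be safely loaded to a utilization ceiling?
import math

def machines_needed(workload_units, capacity_per_machine, max_utilization):
    """Machines required when each can run at most max_utilization."""
    usable = capacity_per_machine * max_utilization
    return math.ceil(workload_units / usable)

workload = 1000   # hypothetical units of sustained work
capacity = 100    # hypothetical units one machine can process flat-out

at_70_percent = machines_needed(workload, capacity, 0.70)  # 15 machines
at_95_percent = machines_needed(workload, capacity, 0.95)  # 11 machines

print(at_70_percent, at_95_percent)
```

Same workload, four fewer boxes to power and cool, purely from running each box closer to its capacity.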

When Less Is More
Think about a car engine. Not every cylinder is firing every time you press on the accelerator. If this were the case, the concept of fuel efficiency would be non-existent (and gas would likely be even more expensive.) Some engines even use a concept called variable displacement, which can dynamically shut off a cylinder or two to optimize energy production. Now, what type of computing device is most similar to a variable displacement engine? That would be the mainframe. The processing demands on any computer shift moment by moment, and mainframes are designed to easily shut down some processors when load is not present.
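The variable-displacement idea can be sketched as a simple policy: keep only as many processor units powered as the momentary load requires, and park the rest. This is an illustrative model, not any vendor’s actual power-management algorithm, and the processor count is hypothetical:

```python
# Illustrative "variable displacement" for processors: given the current
# load fraction, keep only the processors that load requires powered on.
import math

def active_processors(load_fraction, total_processors):
    """Processors to keep powered for the current load.
    At least one stays on; the rest can be parked to save energy."""
    return max(1, math.ceil(load_fraction * total_processors))

total = 16  # hypothetical processor count
for load in (0.05, 0.50, 1.00):
    print(load, active_processors(load, total))  # 1, 8, 16 respectively
```

At 5% load, fifteen of the sixteen units can sit dark, which is the computing analogue of cylinders shutting off at cruising speed.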

Computing the Cost
Too often, business environments demand short-term successes, which result in short-term decision-making. A classic example is considering the cost of acquisition rather than the cost of ownership in hardware and software. While one system may cost significantly less to buy and configure, significant costs can pile up over even six months, including electrical usage and heating and cooling. Figures from manufacturers promise significant savings over the lifetime of ownership. I’ve even heard stories in which, because of power-capacity limitations (inside the Washington, D.C. beltway, for example), the only computing resources that could be added were mainframes.
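The acquisition-versus-ownership point is easy to make concrete. In this sketch all dollar figures are hypothetical placeholders, not vendor numbers; the shape of the comparison is what matters:

```python
# Illustrative total-cost-of-ownership comparison: a system that is
# cheaper to buy can overtake a pricier one once power and cooling accrue.

def total_cost(acquisition, monthly_energy, monthly_cooling, months):
    """Acquisition cost plus recurring operating costs over a period."""
    return acquisition + (monthly_energy + monthly_cooling) * months

# Hypothetical: the cluster is cheap up front but costly to run;
# the consolidated system costs more to buy but less per month.
cluster = total_cost(acquisition=50_000, monthly_energy=4_000,
                     monthly_cooling=2_000, months=36)
consolidated = total_cost(acquisition=120_000, monthly_energy=1_000,
                          monthly_cooling=500, months=36)

print(cluster, consolidated)  # 266000 174000
```

With these placeholder figures the “cheaper” cluster costs roughly 50% more over three years, which is exactly the kind of result a purchase-price comparison hides.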

Using Hardware Well
In comparing the efficiency of computing systems, a vital question is often overlooked: How effectively does software utilize the hardware? We’ve all experienced problems with applications that run poorly on non-native systems. Whether or not a piece of software can perform as intended, as well as use all of the available processing power, can have a huge impact on efficiency. In the case of mainframes, the hardware/software match is often a best-case scenario. Applications and operating systems that were designed prior to recent leaps in memory, I/O and processing are able to take advantage of these advances without some of the inefficiencies that non-native hardware/software pairings can introduce. That has a direct effect on electrical usage and efficiency.

People Power
We’ve been focusing on the efficiency of processors and cooling systems, but what about the human factor? How system administrators use their time is an important part of the energy-efficiency equation. Once again, mainframes make a difference: multiple smaller systems take more time to manage than fewer large ones. This may seem at first like a small point, but, like the other issues explored here, the long-tail effect can be significant. Each of those smaller systems can differ in configuration, patch level, and more, and multiple small issues have a nasty habit of turning into bigger ones.

It goes without saying that energy efficiency is essential to a company’s success. But I’ve witnessed too many situations where a drive for greater efficiency occurs without considering the longer view or subtle details. Those who do take a full look at their options, however, may be well served by the impact of Big Iron.



Microsoft, U.S. face off again over emails stored in Ireland

The company has refused to turn over to the government the emails stored in Ireland

A dispute between Microsoft and the U.S. government over turning over emails stored in a data center in Ireland comes up for oral arguments in an appeals court in New York on Wednesday.

Microsoft holds that an outcome against it could damage the trust of its cloud customers abroad, as well as relationships between the U.S. and other governments that have their own data protection and privacy laws.

Customers outside the U.S. would be concerned about extra-territorial access to their user information, the company has said. A decision against Microsoft could also establish a norm that could allow foreign governments to reach into computers in the U.S. of companies over which they assert jurisdiction, to seize the private correspondence of U.S. citizens.

The U.S. government has a warrant for access to emails, held by Microsoft, of a person involved in an investigation, but the company holds that nowhere did the U.S. Congress say that the Electronic Communications Privacy Act “should reach private emails stored on providers’ computers in foreign countries.”

It prefers that the government use “mutual legal assistance” treaties it has in place with other countries including Ireland. In an amicus curiae (friend of the court) brief filed in December in the U.S. Court of Appeals for the Second Circuit, Ireland said it “would be pleased to consider, as expeditiously as possible, a request under the treaty, should one be made.”

A number of technology companies, civil rights groups and computer scientists have filed briefs supporting Microsoft.

In a recent filing in the Second Circuit court, Microsoft said “Congress can and should grapple with the question whether, and when, law enforcement should be able to compel providers like Microsoft to help it seize customer emails stored in foreign countries.”

“We hope the U.S. government will work with Congress and with other governments to reform the laws, rather than simply seek to reinterpret them, which risks happening in this case,” Microsoft’s general counsel Brad Smith wrote in a post in April.

Lower courts have disagreed with Microsoft’s point of view. U.S. Magistrate Judge James C. Francis IV of the U.S. District Court for the Southern District of New York had in April last year refused to quash a warrant that authorized the search and seizure of information linked with a specific Web-based email account stored on Microsoft’s premises.

Microsoft complied with the search warrant by providing non-content information held on its U.S. servers, but moved to quash the warrant after it concluded that the account was hosted in Dublin and the content was stored there as well.

If the territorial restrictions on conventional warrants applied to warrants issued under section 2703(a) of the Stored Communications Act, a part of the ECPA, the burden on the government would be substantial and law enforcement efforts would be seriously impeded, the magistrate judge wrote in his order. The act covers required disclosure of wire or electronic communications in electronic storage.

While the company held that courts in the U.S. are not authorized to issue warrants for extraterritorial search and seizure, Judge Francis held that a warrant under the Stored Communications Act, was “a hybrid: part search warrant and part subpoena.” It is executed like a subpoena in that it is served on the Internet service provider who is required to provide the information from its servers wherever located, and does not involve government officials entering the premises, he noted.

Judge Loretta Preska of the District Court for the Southern District of New York rejected Microsoft’s appeal of the ruling, and the company thereafter appealed to the Second Circuit.
