Archive for the ‘Tech’ Category

Three key challenges in vulnerability risk management

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

Vulnerability risk management has re-introduced itself as a top challenge – and priority – for even the most savvy IT organizations. Despite the best detection technologies, organizations continue to get compromised on a daily basis. Vulnerability scanning provides visibility into potential land mines across the network, but often just results in data tracked in spreadsheets and independent remediation teams scrambling in different directions.

The recent Verizon Data Breach report showed that 99.9% of the vulnerabilities exploited in attacks were compromised more than a year after they were published. This clearly demonstrates the need to change from a “find” to a “fix” mentality. Here are three key challenges to getting there:

* Vulnerability prioritization. Today, many organizations prioritize based on CVSS score and perform some level of asset importance classification within the process. However, this is still generating too much data for remediation teams to take targeted and informed action. In a larger organization, this process can result in tens of thousands – or even millions – of critical vulnerabilities detected. So the bigger question is – which vulnerabilities are actually critical?

Additional context is necessary to get a true picture of actual risk across the IT environment. Organizations should consider additional factors in threat prioritization, such as the exploitability and value of the affected asset, the availability of public exploits for the vulnerability, attacks and malware actively targeting the detected vulnerability, or the popularity of a vulnerability in social media conversations.
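The contextual factors above can be folded into a simple weighted score. The sketch below is purely illustrative: the field names, weights, and cap are hypothetical, not taken from any particular scanner or standard.

```python
# Hypothetical sketch: enriching a CVSS score with threat context to
# rank vulnerabilities. All field names and weights are illustrative.

def risk_score(vuln):
    """Combine base CVSS with context factors into a 0-100 score."""
    score = vuln["cvss"] * 10                # base severity, scaled to 0-100
    if vuln.get("public_exploit"):           # exploit code is publicly available
        score *= 1.5
    if vuln.get("active_malware"):           # malware actively targets this CVE
        score *= 1.5
    score *= vuln.get("asset_value", 1.0)    # 0.5 = lab box, 2.0 = crown jewels
    return min(round(score, 1), 100.0)

vulns = [
    {"id": "CVE-A", "cvss": 9.8, "public_exploit": False, "asset_value": 0.5},
    {"id": "CVE-B", "cvss": 6.5, "public_exploit": True,
     "active_malware": True, "asset_value": 2.0},
]
ranked = sorted(vulns, key=risk_score, reverse=True)
```

Note how the context inverts the naive ordering: the CVSS 6.5 flaw with a public exploit, active malware, and a high-value asset outranks the CVSS 9.8 flaw sitting on a lab machine.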

* Remediation process. The second and perhaps most profound challenge is in the remediation process itself. On average, organizations take 103 days to remediate a security vulnerability. In a landscape of zero-day exploits and the speed and agility at which malware developers operate, the window of opportunity is wide open for attackers.

The remediation challenge is most often rooted in the process itself. While there is no technology that can easily and economically solve the problem, there are ways to enable better management through automation that can improve the process and influence user behavior. In some cases, there are simple adjustments that can result in a huge impact. For example, a CISO at a large enterprise company recently stated that something as easy as being able to establish deadlines and automated reminder notifications when a deadline was approaching could vastly improve the communication process between Security and DevOps/SysAdmin teams.

In other words, synchronizing communication between internal teams through workflow automation can help accelerate the remediation process. From simple ticket and task management to notifications and patch deployment, the ability to track the remediation process within a single unified view can eliminate the need to navigate and update multiple systems and potentially result in significant time savings.
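The deadline-and-reminder idea the CISO described is mechanically simple. Here is a minimal sketch; the ticket fields and warning window are hypothetical, not from any ticketing product.

```python
# Illustrative sketch: flag remediation tickets whose deadline falls
# inside a warning window, so the owning team gets an automated nudge.
from datetime import date, timedelta

def tickets_needing_reminder(tickets, today, warn_days=3):
    """Return open tickets whose deadline is within warn_days of today."""
    cutoff = today + timedelta(days=warn_days)
    return [t for t in tickets
            if not t["done"] and today <= t["deadline"] <= cutoff]

tickets = [
    {"id": 1, "deadline": date(2015, 9, 10), "done": False},
    {"id": 2, "deadline": date(2015, 9, 30), "done": False},
    {"id": 3, "deadline": date(2015, 9, 9),  "done": True},
]
due_soon = tickets_needing_reminder(tickets, today=date(2015, 9, 8))
```

In practice the `due_soon` list would feed an email or chat notification rather than sit in memory, but the filtering logic is the whole trick.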

* Program governance. The adage, “You can’t manage it if you can’t measure it” is true when it comes to evaluating the success of a vulnerability risk management program. In general, information security programs are hard to measure compared to other operational functions such as sales and engineering. One can create hard metrics, but it is often difficult to translate those metrics into measurable business value.

There is no definitive answer for declaring success. For most organizations, this will likely vary depending on the regulatory nature of their industry and overall risk management strategy. However, IT and security teams demonstrate greater value when they can show the level of risk removed from critical systems.

Establishing the right metrics is the key to any successful governance program, but it also must have the flexibility to evolve with the changing threat landscape. In the case of vulnerability risk management, governance may start with establishing baseline metrics such as number of days to patch critical systems or average ticket aging. As the program evolves, new, and more specific, metrics can be introduced such as number of days from discovery to resolution (i.e., time when a patch is available to actual application).
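The discovery-to-resolution metric mentioned above reduces to date arithmetic. A minimal sketch, with a hypothetical record format:

```python
# Baseline governance metric: average days from discovery to resolution.
from datetime import date

records = [
    {"discovered": date(2015, 1, 5),  "resolved": date(2015, 2, 4)},  # 30 days
    {"discovered": date(2015, 1, 10), "resolved": date(2015, 3, 1)},  # 50 days
]

def mean_days_to_resolve(records):
    """Average the discovery-to-resolution gap across closed findings."""
    days = [(r["resolved"] - r["discovered"]).days for r in records]
    return sum(days) / len(days)
```

Tracking this number per quarter, or per asset tier, is what turns raw scan data into a trend a governance program can act on.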

Practitioners can start improving the process by making some simple changes. For example, most vulnerability assessment tools offer standard prioritization of risks based on CVSS score and asset classification. However, this approach is still generating too much data for remediation teams. Some organizations have started to perform advanced correlation with threat intelligence feeds and exploit databases. Yet, this process can be a full-time job in itself, and is too taxing on resources.

Technologies exist today to help ease this process through automation by enriching the results of vulnerability scan data with rich context beyond the CVSS score. Through correlation with external threat, exploit, malware, and social media feeds and the IT environment, a list of prioritized vulnerabilities is delivered based on the systems most likely to be targeted in a data breach. Automating this part of the process with existing technologies can help cut the time spent on prioritization from days to hours.

Today, vulnerability management has become as much about people and process as it is about technology, and this is where many programs are failing. The problem is not detection. Prioritization, remediation, and program governance have become the new priorities. It is no longer a question of if you will be hacked, but rather when, and most importantly, how. The inevitable breach has become a commonly accepted reality. Vulnerability risk management calls for a new approach that moves beyond a simple exercise in patch management to one focused on risk reduction and tolerable incident response.

NopSec provides precision threat prediction and remediation workflow solutions to help businesses protect their IT environments from security breaches. Based on a flexible SaaS architecture, NopSec Unified VRM empowers security teams to better understand vulnerability data, assess the potential business impact, and reduce the time to remediation.



7 serious software update SNAFUs of the last 25 years

Microsoft’s Windows 10 eager early upgrade wasn’t the first software update gone way too wrong.

Here are seven (more) serious software update SNAFUs.

AT&T Update Hangs Up LD Calls
In January 1990, AT&T dropped millions of long-distance calls after updating its 4ESS network switches the previous December. The company had coded a single-line error into the switches’ recovery software. When a New York switch reset, the faulty recovery code sent the network’s hardware “crazy,” triggering cascading failures.

TrendMicro marks Windows OS a virus
In September 2008, TrendMicro’s AV update tagged critical Microsoft Windows files as a virus, producing the dreaded Blue Screen of Death (BSOD). “I fixed some of those PCs while working at BestBuy. TrendMicro was our preferred AV software so a lot of clients were affected,” says Mike Garuccio, Garuccio Technical Services.

NT service pack packs a punch on PD
In September 2005, an LA-area police department, an Alvaka Networks customer, saw the chief’s and his lieutenants’ PCs crash. Updating the department’s HP desktop Windows NT 4.0 machines to Service Pack 6a caused the crashes. “It’s not pretty when the top brass at the PD cannot work,” says Oli Thordarson, CEO, Alvaka Networks.

Drivers drive admins nuts
In February 2000, Windows 2000 unleashed an updated hardware driver model that drove systems administrators nuts. “Printers, scanners, and peripherals stopped working under Microsoft’s new Windows Driver Model, which Microsoft had lauded as a solution for migrating from Windows 98 to Windows 2000,” says Clay Calvert, director of CyberSecurity, MetroStar Systems.

AVG saddles Wintrust.dll with Trojan moniker
In March 2013, an updated AVG anti-virus program stopped trusting the benign Windows wintrust.dll file in Windows XP, marking it as a Trojan horse. Unwitting users who removed the file at the behest of AVG saw their PCs go kaput.

Microsoft Office 2000 update bug bite
In April 2003, the Microsoft Office 2000 SR-1 update spun out of control, trapping customers in a continuous registration-request loop that asked them to register their Microsoft Office 2000 product again, over and over and over.

Microsoft WGA finds its own software disingenuous
In August 2007, a newly updated Windows Genuine Advantage (WGA), the tool Microsoft created to seek out Windows XP and Vista software pirates and send them walking the proverbial plank, instead identified many thousands of licensed copies of the popular OSes as unlicensed. It informed innocent users of their digital high crimes against the software vendor and, in the case of Vista, disabled numerous features.




The 33 worst lines ever said by tech recruiters

Tech recruiters say the darndest things. How many of these cringeworthy pronouncements have you heard?

Everyone loves to talk about terrible pickup lines from the world of romance, but there’s a far worse kind of misguided enticement going on right here in the realm of technology. I’m talking about the delicate dance of tech recruitment — if you work in any area of IT, you probably know precisely what I mean.

The men and women tasked with recruiting tech talent go to great lengths to attract the attention of their targets — (often unsuspecting) tech pros viewed as valuable “gets.” While some recruiters prove to be invaluable in improving your career, finding exactly the right words to pique your interest in a new gig, far more seem to stammer, stumble, and elicit exasperated sighs.

You don’t have to take my word for it. Several brave tech workers from around the globe have taken the time to share their favorite (if you can call them that) tech recruiter horror tales, and we have highlights for you here.

From game- and/or world-changing concepts to oh-so-disruptive innovation, some tech recruiters resort to impressive-sounding catchphrases that don’t actually mean anything. It’s hard not to wonder how many of these cliché-inclined recruiters are relying on buzzword-generating algorithms to come up with their pitches. (Yes, such things do exist!)

I mean, really: For the love of vertical integration, can’t someone think out of the box?

1. “We need someone who is bright and passionate. Our product is one of a kind and slated to be a total game changer.”

2. “We are serious about changing the world.”

3. “We need people who think so far out of the box that the box isn’t even in the picture anymore.”

4. “This will give you great exposure to big data in the cloud, and you will be working with some extremely intelligent technologists!”

5. “As a company that specializes in innovation, [we] want the best and brightest creative visionaries.”

6. “I am working with the founders of a stealth mode startup disrupting the infrastructure/data center space.”

7. “We are working on absolutely amazing things and will scale tremendously.”

8. “We have a tight-knit dynamic team that is responsible for delivering consumer experiences.”

9. “We’re looking for a code ninja…”

10. “We’re looking for a Java wizard…”

11. “We’re looking for someone truly brilliant, and so we’re willing to offer a lot:

12. “Want to work with a team of diverse rock stars?”

13. “Would it help if I told you that I was helping out with ninja engineer hiring for Facebook?”

14. “Chuck Norris coding skills required.”

15. “EliteCoder you = new EliteCoder(“Can you code with the best?”);”

16. “The company is the first of its kind, as it is 100 percent focused on the integration of mobile/wearables and enterprise.”

17. “Think of it as meets LinkedIn meets Facebook with real privacy.”

18. “Our product combines many elements of Facebook, eBay, Blogger, PayPal, and Etsy.”


20. “We’re building a dynamic team that lives on the bleeding edge of technology with a unique opportunity to work on the Silverlight platform. Prior experience with Adobe Flash highly desired.”

21. “The best way to negotiate is not to negotiate at all … so tell me exactly what you’re making.”

22. “What we can offer:

23. “The environment is hip and modern, and very inclusive and friendly to women and other weirder types.”

24. “You have JavaScript on your resume. What do you mean you don’t know Java?”

25. “[This job requires] experience of developing databases in HTML.”

26. “[Looking for a] senior iOS architect with 10-plus years experience.” (The first version of iOS was released in 2007.)

27. “What’s the difference between a UI engineer and a Unix engineer?” (Posed to an IT employee by a recruiter hired to find engineers and programmers.)

28. “We are very impressed with all your Android work at [Company X] and we believe you would be a perfect fit for this great opportunity.” (Sent to someone after his first day as a Company X employee.)

29. “What a beautiful morning, what a beautiful day! … I am looking for people who don’t follow roads, the Docs of the world. The people who take life and grab it, regardless of any paths or roads that have been set. We spend a lifetime of thinking ‘what if’? But what if we spent a lifetime of ‘I did!'”

30. “You’ll be empowered to identify problems and dive head first into the equation. Risk is encouraged. Victory makes us who we are.”

31. “I don’t mean to be a nuisance, but there’s really no way of knowing if someone might be interested or not without a response.”

32. “I came across your profile and was very impressed by your pedigree.”

33. “Due to the high volume of applicants, only shortlisted candidates will be shortlisted.”

Well, that certainly clears things up. No word yet, however, on whether said shortlist could include any “women or other weirder types.”



11 cutting-edge databases worth exploring now

From document-graph hybrids to ‘unkillable’ clusters, the next generation of databases offers intrigue and innovation

Only a few years ago, database work was among the most boring of tasks in IT — in a good way. Data went into one of the major SQL databases and it came out later, all in one piece, exactly as it went in. The database creators had succeeded in delivering rock-solid performance, and everyone started taking it for granted.

Then the nature of what we wanted to store changed. Databases had to move beyond bank accounts and airline tickets because everyone had begun sharing data on social networks. Suddenly there was much more data to store, and most of this new data didn’t fit into the old tables. The work of database admins and creators transformed, and what has emerged is a wide array of intriguing solutions that help to make databases among the more intriguing technologies today.

Cassandra, MongoDB, CouchDB, Riak, Neo4j — the innovations of the past several years are by now well-established at many organizations. But a new generation is fast rising. Here we provide an overview of 11 cutting-edge databases tuned to store more data in more flexible formats on more machines in a way that can be queried in a variety of ways.

The database world has never been as varied and interesting as it is right now.

When a few refugees from Twitter wanted to build something new with the experience they gained processing billions of tweets, they decided that a distributed database was the right challenge. Enter FaunaDB. In goes the JSON, and out come answers from a distributed collection of nodes. FaunaDB’s query language offers the ability to ask complex questions that join together data from different nodes while searching through social networks and other graph structures in your databases.

If you’re simply interested in experimenting or you don’t want the hassle of rolling your own, FaunaDB comes in a cloud database-as-a-service version. When and if you want to take more control, you can install the enterprise version on your own iron.

You wouldn’t be the first architect to throw up your hands and say, “If only we could mix the flexibility of document-style databases with the special power of graph databases and still keep the familiar structure of tabular data. Then we would have it made.”

Believe it or not, a database aimed at satisfying those needs is already here. ArangoDB lets you stick data in documents or in a graph database. Then you can write queries that are really loops with joins that run on the database with all of the locality that makes processing those queries faster. Oh, and the query language is wrapped up in JavaScript that exposes microservices through a RESTful API. It’s a kitchen-sink approach that’s bound to make many people happy.

ArangoDB isn’t the only tool in town trying to mix the power of graph and document databases. OrientDB does something similar, but packages itself as a “second-generation graph database.” In other words, the nodes in the graphs are documents waiting for arbitrary key-value pairs.

This makes OrientDB feel like a graph database first, but there’s no reason you can’t use the key-value store alone. OrientDB also includes a RESTful API waiting for your queries.

How many times have you found yourself wishing for the power of a search engine like Lucene but with the structure and querying ease of SQL? If the answer is more than zero, this next database may be the answer.

While Lucene began as a search engine for finding keywords in large, unstructured blocks of text, it has always offered the ability to store keys and matching values in each document, allowing some to consider it part of the NoSQL revolution. This database started with Lucene and its larger, scalable, and distributed cousin Elasticsearch, but added a query language with SQL syntax. Its developers are also working on adding JOINs, which will make it even more powerful, assuming you need to use JOINs.

People who love the old-fashioned SQL way of thinking will enjoy the fact that it bundles newer, scalable technology in a manner that’s easier for SQL-based systems to use.

The name might not be appealing, but the sentiment is. CockroachDB’s developers embraced the idea that no organism is as long-lasting or as resilient as the cockroach, bragging, “CockroachDB allows you to deploy applications with the strongest disaster recovery story in the industry.”

While time will tell whether they’ve truly achieved that goal, it won’t be for lack of engineering. The team’s plan is to make CockroachDB simple to scale. If you add a new node, CockroachDB will rebalance itself to use the new space. If you kill a node, it will shrink and replicate the data from the backup sources. To add extra security, CockroachDB promises fully serializable transactions across the entire cluster. You don’t need to worry about the data, which incidentally is stored as a “single, monolithic map from key to value where both keys and values are byte strings (not unicode).”

In a traditional database, you send a query and the database sends an answer. If you don’t send a query, the database doesn’t send you anything. It’s simple and perfect for some apps, but not for others.

RethinkDB inverts the old model and pushes data to clients. If the query answer changes, RethinkDB sends the new data to the client. It’s ideal for some of the new interactive apps that are coming along that help multiple people edit documents or work on presentations at the same time. Changes from one user are saved to RethinkDB, which promptly sends them off to the other users. The data is stored in JSON documents, which is ideal for Web apps.
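The push model can be emulated in a few lines of plain Python: subscribers register a callback, and every write is pushed to them immediately rather than waiting to be polled. This only mimics the idea behind RethinkDB changefeeds; it is not RethinkDB’s actual API.

```python
# Toy emulation of a push-style table: writes are delivered to
# subscribers as {"old_val": ..., "new_val": ...} change events.

class PushTable:
    def __init__(self):
        self.rows = {}
        self.subscribers = []

    def changes(self, callback):
        """Register a callback that receives every subsequent change."""
        self.subscribers.append(callback)

    def insert(self, key, doc):
        old = self.rows.get(key)
        self.rows[key] = doc
        for cb in self.subscribers:        # push, don't wait to be polled
            cb({"old_val": old, "new_val": doc})

seen = []
table = PushTable()
table.changes(seen.append)                 # subscribe before writing
table.insert("doc1", {"title": "Draft"})
table.insert("doc1", {"title": "Final"})
```

In the collaborative-editing scenario above, each connected client would hold such a subscription, so one user’s save arrives at every other user without anyone re-querying.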

Some databases want to store all of the information in the world. InfluxDB merely wants the time-series data, the numbers that come in an endless stream. They might be log files from a website or sensor readings from an experiment, but they keep coming and want to be analyzed.

InfluxDB offers a basic HTTP(s) API for adding data. For querying, it has an SQL-like syntax that includes some basic statistical operators like MEAN. Thus, you can ask for the average of a particular value over time and it will compute the answer inside the database without sending all of the data back to you. This makes building time-series websites easy and efficient.
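To see what in-database aggregation saves you, here is the client-side equivalent of a windowed MEAN, sketched in plain Python. The bucketing scheme and query shown in the comment are illustrative only.

```python
# What "MEAN ... GROUP BY time(10s)"-style queries do server-side:
# bucket points into fixed time windows and average each bucket.
from collections import defaultdict

def mean_by_window(points, window_secs):
    """points: (unix_timestamp, value) pairs -> {window_start: mean}."""
    buckets = defaultdict(list)
    for ts, value in points:
        buckets[ts - ts % window_secs].append(value)   # floor to window start
    return {start: sum(vals) / len(vals) for start, vals in buckets.items()}

points = [(100, 1.0), (104, 3.0), (110, 5.0)]
means = mean_by_window(points, window_secs=10)
```

With InfluxDB this loop runs inside the database, so only the small per-window averages cross the network instead of every raw point.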


Clustrix may not be a new product anymore — it’s up to Version 6.0 — but it still calls itself part of the NewSQL revolution because it offers automatic replication and clustering with much of the speed of an in-memory database. The folks behind Clustrix have added plenty of management tools to ensure the cluster can manage itself without too much attention from a database administrator.

Perhaps it makes more sense to see the version number as a sign of maturity and experience. You get all of the fun of new ideas with the assurance that comes only from years of testing.


If you have data to spread around the world in a distributed network of databases, NuoDB is ready to store it for you with all the concurrency control and transaction durability you need. The core is a “durable distributed cache” that absorbs your queries and eventually pushes the data into a persistent disk. All interactions with the cache can be done with ACID transaction semantics — if you desire. The commit protocol can be adjusted to trade off speed for durability.

The software package includes a wide variety of management tools for tracking the nodes in the system. All queries use an SQL-like syntax.


Some databases store information. VoltDB is designed to analyze it at the same time, offering “streaming analytics” that “deliver decisions in milliseconds.” The data arrives in JSON or SQL, and is stored and analyzed in the same process, which incidentally is integrated with Hadoop to simplify elaborate computation. Oh, and it also offers ACID transactional guarantees on the storage.


RAM has never been cheaper — or faster — and MemSQL is ready to make it easy to keep all of your data in RAM so that queries can be answered faster than ever. It’s like a smart cache, but can also replicate itself across a cluster. Once the data is in RAM, it’s also easy to analyze with built-in analytics.

The latest version also supports geospatial data for both storage and analysis. It’s easy to create geo-aware mobile apps that produce analytical results as the apps move around the world.



As containers take off, so do security concerns

Containers offer a quick and easy way to package up applications but security is becoming a real concern

Containers offer a quick and easy way to package up applications and all their dependencies, and are popular for testing and development.

According to a recent survey sponsored by container data management company Cluster HQ, 73 percent of enterprises are currently using containers for development and testing, but only 39 percent are using them in a production environment.

But this is changing: 65 percent said that they plan to use containers in production in the next 12 months, though respondents cited security as their biggest worry. According to the survey, just over 60 percent said that security was either a major or a moderate barrier to adoption.

Containers can be run within virtual machines or on traditional servers. The idea is somewhat similar to that of a virtual machine itself, except that while a virtual machine includes a full copy of the operating system, a container does not, making them faster and easier to load up.

The downside is that containers are less isolated from one another than virtual machines are. In addition, because containers are an easy way to package and distribute applications, many are doing just that — but not all the containers available on the web can be trusted, and not all libraries and components included in those containers are patched and up-to-date.

According to a recent Red Hat survey, 67 percent of organizations plan to begin using containers in production environments over the next two years, but 60 percent said that they were concerned about security issues.
Isolated, but not isolated enough

Although containers are not as completely isolated from one another as virtual machines, they are more secure than just running applications by themselves.

“Your application is really more secure when it’s running inside a Docker container,” said Nathan McCauley, director of security at Docker, which currently dominates the container market.

According to the Cluster HQ survey, 92 percent of organizations are using or considering Docker containers, followed by LXC at 32 percent and Rocket at 21 percent.

Since the technology was first launched, McCauley said, Docker containers have had built-in security features such as the ability to limit what an application can do inside a container. For example, companies can set up read-only containers.

Containers also use name spaces by default, he said, which prevent applications from being able to see other containers on the same machine.

“You can’t attack something else because you don’t even know it exists,” he said. “You can’t even get a handle on another process on the machine, because you don’t even know it’s there.”


However, container isolation doesn’t go far enough, said Simon Crosby, co-founder and CTO at security vendor Bromium.

“Containers do not make a promise of providing resilient, multi-tenant isolation,” he said. “It is possible for malicious code to escape from a container to attack the operating system or the other containers on the machine.”

If a company isn’t looking to get maximum efficiency out of its containers, however, it can run just one container per virtual machine.

This is the case with Nashua, NH-based Pneuron, which uses containers to distribute its business application building blocks to customers.

“We wanted to have assigned resourcing in a virtual machine to be usable by a specific container, rather than having two containers fight for a shared set of resources,” said Tom Fountain, the company’s CTO. “We think it’s simpler at the administrative level.”

Plus, this gives the application a second layer of security, he said.

“The ability to configure a particular virtual machine will provide a layer of insulation and security,” he said. “Then when we’re deployed inside that virtual machine then there’s one layer of security that’s put around the container, and then within our own container we have additional layers of security as well.”

But the typical use case is multiple containers inside a single machine, according to a survey of IT professionals released Wednesday by container security vendor Twistlock.

Only 15 percent of organizations run one container per virtual machine. The majority of the respondents, 62 percent, said that their companies run multiple containers on a single virtual machine, and 28 percent run containers on bare metal.

And the isolation issue is still not figured out, said Josh Bressers, security product manager at Red Hat.

“Every container is sharing the same kernel,” he said. “So if someone can leverage a security flaw to get inside the kernel, they can get into all the other containers running that kernel. But I’m confident we will solve it at some point.”

Bressers recommended that when companies think about container security, they apply the same principles as they would apply to a naked, non-containerized application — not the principles they would apply to a virtual machine.

“Some people think that containers are more secure than they are,” he said.
Vulnerable images

McCauley said that Docker is also working to address another security issue related to containers — that of untrusted content.

According to BanyanOps, a container technology company currently in private beta, more than 30 percent of containers distributed in the official repositories have high priority security vulnerabilities such as Shellshock and Heartbleed.

Outside the official repositories, that number jumps to about 40 percent.

Of the images created this year and distributed in the official repositories, 74 percent had high or medium priority vulnerabilities.

“In other words, three out of every four images created this year have vulnerabilities that are relatively easy to exploit with a potentially high impact,” wrote founder Yoshio Turner in the report.

In August, Docker announced the release of Docker Content Trust, a new feature in the container engine that makes it possible to verify the publisher of Docker images.

“It provides cryptographic guarantees and really leapfrogs all other secure software distribution mechanisms,” Docker’s McCauley said. “It provides a solid basis for the content you pull down, so that you know that it came from the folks you expect it to come from.”

Red Hat, for example, which has its own container repository, signs its containers, said Red Hat’s Bressers.

“We say, this container came from Red Hat, we know what’s in it, and it’s been updated appropriately,” he said. “People think they can just download random containers off the Internet and run them. That’s not smart. If you’re running untrusted containers, you can get yourself in trouble. And even if it’s a trusted container, make sure you have security updates installed.”

According to Docker’s McCauley, existing security tools should be able to work on containers the same way they do on regular applications; he also recommended that companies follow Linux security best practices.

Earlier this year, Docker, in partnership with the Center for Internet Security, published a detailed security benchmark best-practices document, along with a tool called Docker Bench that checks host machines against these recommendations and generates a status report.

However, for production deployment, organizations need tools that they can use that are similar to the management and security tools that already exist for virtualization, said Eric Chiu, president and co-founder at virtualization security vendor HyTrust.

“Role-based access controls, audit-quality logging and monitoring, encryption of data, hardening of the containers — all these are going to be required,” he said.

In addition, container technology makes it difficult to see what’s going on, experts say, and legacy systems can’t cut it.

“Lack of visibility into containers can mean that it is harder to observe and manage what is happening inside of them,” said Loris Degioanni, CEO at Sysdig, one of the new vendors offering container management tools.

Another new vendor in this space is Twistlock, which came out of stealth mode in May.

“Once your developers start to run containers, IT and IT security suddenly becomes blind to a lot of things that happen,” said Chenxi Wang, the company’s chief strategy officer.

Say, for example, you want to run anti-virus software. According to Wang, it won’t run inside the container itself, and if it’s running outside the container, on the virtual machine, it can’t see into the container.

Twistlock provides tools that can add security at multiple points. It can scan a company’s repository of containers, and it can scan containers as they are loaded, preventing vulnerable containers from launching.

“For example, if the application inside the container is allowed to run as root, we can say that it’s a violation of policy and stop it from running,” she said.
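Twistlock’s engine is proprietary, but the kind of admission check Wang describes can be sketched in a few lines. This hypothetical gatekeeper keys off the `User` field that `docker inspect` reports for an image; an empty value means the container defaults to root:

```python
# Hypothetical policy gate, not Twistlock's implementation: refuse to
# launch any container whose image would run its process as root.

def violates_root_policy(image_config):
    """Docker images that never set USER default to root (uid 0)."""
    user = image_config.get("User", "")
    return user in ("", "root", "0")

def admit(image_config):
    if violates_root_policy(image_config):
        return "blocked: image runs as root, policy violation"
    return "admitted"
```

A Dockerfile with no `USER` instruction leaves that field empty, so such an image is blocked, while one built with a dedicated unprivileged user is admitted.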

Twistlock can monitor whether a container is communicating with known command-and-control hosts and either report it, cut off the communication channel, or shut down the container altogether.
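That monitoring logic can be roughly sketched as a lookup against a threat feed. The function names are invented for illustration, and the feed addresses are placeholders drawn from the RFC 5737 documentation ranges:

```python
# Illustrative only: compare a container's outbound destinations against
# a feed of known command-and-control hosts, then apply the configured
# response: report, isolate (cut the channel), or kill the container.

KNOWN_C2 = {"203.0.113.7", "198.51.100.23"}  # placeholder threat feed

def triage(container_id, destinations, policy="report"):
    actions = []
    for dest in sorted(destinations & KNOWN_C2):
        if policy == "report":
            actions.append(f"alert: {container_id} contacted {dest}")
        elif policy == "isolate":
            actions.append(f"cut channel from {container_id} to {dest}")
        else:  # policy == "kill"
            actions.append(f"stopped container {container_id}")
    return actions
```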

And the company also monitors communications between the container and the underlying Docker infrastructure, to detect applications that are trying to issue privileged commands or otherwise tunnel out of the container.

Market outlook

According to IDC analyst Gary Chen, container technology is still so new that most companies are still figuring out what value containers offer and how they’re going to use them.

“Today, it’s not really a big market,” he said. “It’s still really early in the game. Security is something you need once you start to put containers into operations.”

That will change once containers get more widely deployed.

“I wouldn’t be surprised if the big guys eventually got into this marketplace,” he said.

More than 800 million containers have been downloaded so far by tens of thousands of enterprises, according to Docker.

But it’s hard to calculate the dollar value of this market, said Joerg Fritsch, research director for security and risk management at research firm Gartner.

“Docker has not yet found a way to monetize their software,” he said, and there are very few other vendors offering services in this space. He estimates the market size to be around $200 million or $300 million, much of it from just a single services vendor, Odin, formerly the service provider part of virtualization company Parallels.

With the exception of Odin, most of the vendors in this space, including Docker itself, are relatively new startups, he said, and there are few commercial management and security tools available for enterprise customers.

“When you buy from startups you always have this business risk, that a startup will change its identity on the way,” Fritsch said.


MCTS Training, MCITP Training

Best Microsoft MCP Certification, Microsoft MCSE Training at

How to get security right when embracing rapid software development

Five steps to reduce risk while moving to continuous updates

Accelerated software development brings with it particular advantages and disadvantages. On one hand, it increases the speed to market and allows for fast, frequent code releases, which trump slow, carefully planned ones that unleash a torrent of features at once. Continuous release cycles also allow teams to fine-tune software. With continuous updates, customers don’t have to wait for big releases that could take weeks or months.

Embracing failure without blame is also a key tenet of rapid acceleration. Teams grow faster this way, and management should embrace this culture change. Those who contribute to accidents can give detailed accounts of what happened without fear of repercussion, providing valuable learning opportunities for all involved.

However, when things are moving as quickly as rapid acceleration allows, outages, security vulnerabilities and bugs become bigger concerns. Mistakes can occur, potentially leading to security problems. The upside: Automation of tasks can actually reduce mistakes and thus remove potential security issues.

When development is rushed without security awareness, the wrong software could be shipped, apps could go out unencrypted or insecure, compliance audits could fail, and intellectual property or private customer data could be leaked. Security is essential to the success of any development project — make it a priority.

How to Accelerate Safely
Minimize security concerns associated with rapid acceleration by talking to all stakeholders involved. Everyone needs to be brought into the discussion. Members of the development team, along with operations and security, should analyze the existing system and vocalize their visions for the new one prior to closing gaps with tools, automation and testing.

To implement a rapid approach to software development while reducing the potential risks, consider these five steps:

* Automate everything. Your team must take time to identify bottlenecks (the delivery process, infrastructure, testing, etc.) and find methods to automate anything that doesn’t need to be completed manually.

Consider establishing a system for continuous deployment. This allows automatic deployment of every software update to production and delivery. Continuous integration should also be a priority so changes and code added to the pipeline are automatically isolated, tested, and reported on before automation tools integrate code into the code base. Automation not only reduces waste in the process, but it also produces a repeatable process and outcome, which are squarely in the wheelhouse of security’s desires.
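The integrate-only-after-testing flow described above can be sketched as a simple gate. The check names and the shape of a “change” here are illustrative, not any specific CI product’s API:

```python
# Minimal CI-gate sketch: a change is integrated into the code base only
# if every automated check passes; otherwise it is rejected along with
# the list of failed checks, making the process repeatable.

def ci_gate(change, checks):
    failures = [name for name, check in checks.items() if not check(change)]
    return ("rejected", failures) if failures else ("integrated", [])

CHECKS = {
    "unit_tests": lambda c: c.get("tests_pass", False),
    "lint": lambda c: c.get("lint_clean", False),
}
```

A change that passes both checks is integrated; flip either flag and it is rejected before it ever reaches the code base.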

* Be agile but not unrealistic. Instead of spending an exorbitant amount of time on planning, flesh out the requirements and begin the process. Start by designating people to stay ahead of development, keep the project on track, and ensure deliverables are completed on schedule. Through it all, keep operations — and your company — transparent.

If someone runs in with a high-priority request, the project manager or product owner can say, “No, we can’t finish that in this sprint, but we can add it to the backlog with a high-priority mark and work it into an upcoming sprint.” Agile programming is a pull model, not a push model. Management needs to understand how this works and support it.

If the sprint’s allocated stories are completed early, more work can then be pulled in. That said, don’t let others push unplanned work on the team. Agile programming requires team agreement to complete a specific amount of work in a specific time frame.

* Work across departments. When departments move together rapidly, tensions will inevitably rise. Security should be brought into the fold so these issues don’t cause speed bumps. Sales teams, marketing teams, or teams invested in the end product need to have an equal seat at the table. Planning should be a collaborative effort among all stakeholders.

* Separate duties and systems. Often, as companies attempt to embrace rapid acceleration, a need for separation of duties may arise as just one of many compliance requirements. Only select employees should have access to production and test systems.

* Work as a team. Ensure everyone understands the company’s compliance and controls requirements. Be creative to ensure requirements are met without creating speed bumps. Also, consider how controls could be automated. Finally, check with your auditor to make sure what you’ve implemented meets the requirements.

Security will always be a concern with development, and that concern only intensifies when processes speed up. As long as your teams work together, communicate clearly, know their places and expectations, and hold one another accountable, you can hasten the development process while keeping security fears at bay.



5 companies that impress with employee benefits

A healthy employee is a happy employee, and these five companies figured that out. These powerhouses offer employees impressive health and wellness benefits to keep stress down and productivity up.

How some companies strive to keep employees happy and healthy
Your office chair is killing you. Well, OK, sitting for eight hours a day at your desk job might not be killing you, but at the very least, it’s not good for your health. On top of that, we’re learning that our brains haven’t caught up with the stress of our culture’s modern “always-on” lifestyles; they still react as they did to the caveman concerns of our past. Is it an email from your boss stressing you out or are you being chased by a lion? Your brain really can’t tell the difference, meaning many of us live in a constant state of fight or flight. And if you have a bad boss, you could even be at higher risk for heart disease, not to mention depression, sleep problems, anxiety and a number of other health issues.

That’s probably why companies are taking corporate wellness and benefits seriously, as more health concerns pop up over sedentary work and stressful environments. Here are five companies with corporate wellness programs and benefits aimed at keeping employees happy, healthy and most of all, productive.

Well-known as a progressive Internet company, Google has an impressive corporate wellness program. To start, the café supplies free breakfast, lunch and dinner for employees, with options ranging from sushi to fresh pressed juice. The Mountain View, Calif., office also has its own on-site physicians and nurses, so if you feel a cold coming on, you can get treated on site. Google also encourages its employees to continue learning by offering a reimbursement program for classes and degree programs. And employees seeking legal counsel can also get advice at no cost and even get legal services at a discount.

There are also shuttle buses, complete with Wi-Fi, to take employees to and from work, as well as an electric-car share program, plug-in stations for electric vehicles and gBikes to get around campus. There’s more, too: Google has on-site exercise facilities, extra time off for new parents, a rumored bowling alley, as well as roof decks and unique office layouts.

Zappos’ decision to do away with bosses and adopt holacracy is a testament to the company’s dedication to staying unique in the corporate world. And that extends to the vast amount of benefits the company offers its employees. Starting with medical, employees get a free employee premium, free primary care and free generic prescriptions. Employees can take advantage of 24-hour telemedicine service, wellness coaches, infertility benefits, on-site health screenings and more.

Zappos’ Las Vegas office features an on-site fitness center with both in-person and virtual exercise classes. Employees can get nutritional advice, take weight management classes, get smoking cessation help, learn to reduce stress, take part in “wellness competitions,” get massages and much more right on campus. There is even a nap room with a “nap pod,” for employees that need to catch a few Z’s before getting back to work. Employees already dedicated to their fitness goals can even receive rewards and recognition from the company for their efforts.

In addition to full benefits like flexible work and time off, comprehensive benefits and travel benefits, just to name a few, employees at Cisco can get acupuncture, physical therapy and primary care right on-site. The company has its own on-site fitness center as well, where employees can get a workout in during the day. Cisco’s campus also has an outdoor sports club, organized sports leagues and hiking and biking trails for employees to use.

Its café focuses on providing fresh, seasonal and healthy food for workers, and there are also gourmet food trucks where employees can get their lunch. Teams also receive “fun-funds,” so they can celebrate and take part in team-building exercises outside of the office. For employees who want to give back, Cisco will donate $10 for every hour of volunteer work, up to $1,000, and will also match any cash donation, up to $1,000, to a nonprofit organization.

While Yahoo CEO Marissa Mayer might have cut back on working from home, a highly sought-after perk, the company offers a number of wellness benefits for employees. Employees can take fitness classes on-site including yoga, cardio-kickboxing, Pilates and even golf lessons. The cafeteria is open 24 hours a day, 7 days a week for those long work days, and employees receive monthly food coupons to help subsidize the cost.

Both men and women get up to eight weeks of leave for the birth of a baby, adoption or foster child placement and new moms can take up to 16 weeks. Employees also get $500 a month for incidentals like groceries, laundry and even going out to eat. And anytime an employee gets to a five-year milestone, they can take up to eight weeks of unpaid leave.

One look at Apple’s page on Glassdoor, and it’s clear people like working for the company. With a whopping 5,500 reviews, the company maintains a 4.5-star rating out of a possible 5 stars. Benefits kick in immediately for employees, and even part-time workers in the Apple store get full benefits.

Some companies might keep employees stocked with soda and bagels, but Apple instead supplies its workers with, well, Apples. And every few weeks the company throws a “beer bash,” where employees can get together on the campus to mingle, listen to live music and drink free beer. Apple also helps with the strain of commuting to Cupertino by offering shuttles and stipends for those traveling by bus or train.




Are mainframes the answer to IT’s energy efficiency concerns?

Anyone who manages the technology demands of a large business faces plenty of exciting moments on the job, but I think it’s safe to say that calculating the energy costs of your company’s IT systems isn’t among those moments.

I was reminded of just how hard it is to factor energy efficiency into purchase and configuration decisions while reading some recent claims in the media around the cloud, and I remembered some simple but often overlooked ways mainframes solve tough energy efficiency dilemmas.

The Power of One
A device that can handle more data with fewer resources sounds like the definition of efficiency to me. No matter how much power it may have, a cluster of servers is still comprised of multiple devices, and every device involved in a clustered system multiplies issues of space, heat production, and power requirements. With up to 141 configurable processor units and 10TB of memory in a single machine, current mainframes offer comparable power to a large cluster of x86-based servers while saving floor space and energy output. That’s important for organizations that are looking to reduce their carbon or physical footprint or meet energy efficiency thresholds or capacity limits.

Limits of Capacity
One of the most energy-efficient aspects of mainframes is rooted in the system’s design. From their inception, mainframes have had some of the highest resource utilization rates of any hardware, often exceeding 95%. Many other systems are designed to run at 70% capacity or less in order to allow for system-originated maintenance, clean up, and checkpoints. If a hefty percentage of a system’s capacity is always busy processing self-generated tasks, then those throughput figures don’t really contribute to efficiency, do they?
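The arithmetic behind that point is simple. With purely illustrative numbers, not vendor benchmarks:

```python
# Illustrative utilization math: how much capacity is actually available
# for business work once headroom is reserved for self-generated
# maintenance, cleanup, and checkpoint tasks.

def useful_capacity(raw_units, utilization):
    return raw_units * utilization

mainframe = useful_capacity(100, 0.95)  # runs near-saturated
cluster = useful_capacity(100, 0.70)    # held at 70% for maintenance headroom
ratio = mainframe / cluster             # roughly 1.36x more useful work
```

The same raw capacity yields roughly a third more useful throughput at 95% utilization than at 70%, which is the efficiency gap the paragraph above describes.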

When Less Is More
Think about a car engine. Not every cylinder is firing every time you press on the accelerator. If this were the case, the concept of fuel efficiency would be non-existent (and gas would likely be even more expensive). Some engines even use a concept called variable displacement, which can dynamically shut off a cylinder or two to optimize fuel use. Now, what type of computing device is most similar to a variable displacement engine? That would be the mainframe. The processing demands on any computer shift moment by moment, and mainframes are designed to easily shut down some processors when load is not present.

Computing the Cost
Too often, business environments demand short-term successes, which result in short-term decision-making. A classic example is considering the cost of acquisition rather than the cost of ownership in hardware and software. While one system may cost significantly less to buy and configure, there are significant costs that can pile up over six months – including electrical usage and heating/cooling. Figures from manufacturers promise significant savings over the lifetime of ownership. I’ve even heard of stories where, due to power capacity limitations, like inside of the Washington D.C. beltway, the only computer resources that could be added were mainframes.
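A back-of-the-envelope comparison makes the acquisition-versus-ownership point concrete. Every figure below is hypothetical:

```python
# Hypothetical five-year TCO: a cheaper-to-buy cluster can end up costing
# more than pricier hardware once power and cooling bills accumulate.

def tco(acquisition, monthly_power, monthly_cooling, months):
    return acquisition + months * (monthly_power + monthly_cooling)

cheap_cluster = tco(200_000, 9_000, 4_000, months=60)  # 980,000
big_iron = tco(600_000, 2_000, 1_000, months=60)       # 780,000
```

Under these invented numbers, the system that cost three times as much to acquire is the cheaper one to own after five years.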

Using Hardware Well
In comparing the efficiency of computing systems, a vital question is often overlooked: How effectively does software utilize the hardware? We’ve all experienced problems with applications that run poorly on non-native systems. Whether or not a piece of software can perform as intended, as well as use all of the available processing power, can have a huge impact on efficiency. In the case of mainframes, the hardware/software match is often a best-case scenario. Applications and operating systems that were designed prior to recent leaps in memory, I/O and processing are able to take advantage of these advances without some of the inefficiencies that non-native hardware/software pairings can introduce. That has a direct effect on electrical usage and efficiency.

People Power
We’ve been focusing on the efficiency of processors and cooling systems, but what about the human factor? How system administrators use their time is an important part of the energy efficiency equation. Once again, mainframes make a difference. Multiple smaller systems take more time to manage than fewer large ones. This may seem at first like a small point, but, like other issues explored here, the long tail effect can be significant. Consider that multiple smaller systems can each have multiple differences in configuration and more. Multiple small issues have a nasty habit of turning into bigger ones.

It goes without saying that energy efficiency is essential to a company’s success. But I’ve witnessed too many situations where a drive for greater efficiency occurs without considering the longer view or subtle details. Those who do take a full look at their options, however, may be well served by the impact of Big Iron.



10 security technologies destined for the dustbin

Systemic flaws and a rapidly shifting threatscape spell doom for many of today’s trusted security technologies

Perhaps nothing, not even the weather, changes as fast as computer technology. With that brisk pace of progress comes a grave responsibility: securing it.

Every wave of new tech, no matter how small or esoteric, brings with it new threats. The security community slaves to keep up and, all things considered, does a pretty good job against hackers, who shift technologies and methodologies rapidly, leaving last year’s well-recognized attacks to the dustbin.

Have you had to enable the write-protect notch on your floppy disk lately to prevent boot viruses or malicious overwriting? Have you had to turn off your modem to prevent hackers from dialing it at night? Have you had to unload your ansi.sys driver to prevent malicious text files from remapping your keyboard to make your next keystroke reformat your hard drive? Did you review your autoexec.bat and config.sys files to make sure no malicious entries were inserted to autostart malware?

Not so much these days — hackers have moved on, and the technology made to prevent older hacks like these is no longer top of mind. Sometimes we defenders have done such a good job that the attackers decided to move on to more fruitful options. Sometimes a particular defensive feature gets removed because the good guys determined it didn’t protect that well in the first place or had unexpected weaknesses.

If you, like me, have been in the computer security world long enough, you’ve seen a lot of security tech come and go. It’s almost to the point where you can start to predict what will stick and be improved and what will sooner or later become obsolete. The pace of change in attacks and technology alike mean that even so-called cutting-edge defenses, like biometric authentication and advanced firewalls, will eventually fail and go away. Surveying today’s defense technologies, here’s what I think is destined for the history books.

Doomed security technology No. 1: Biometric authentication

Biometric authentication is a tantalizing cure-all for log-on security. After all, using your face, fingerprint, DNA, or some other biometric marker seems like the perfect log-on credential — to someone who doesn’t specialize in log-on authentication. As far as those experts are concerned, it’s not so much that biometric methods are rarely as accurate as most people think; it’s more that, once stolen, your biometric markers can’t be changed.

Take your fingerprints. Most people have only 10. Anytime your fingerprints are used as a biometric logon, those fingerprints — or, more accurately, the digital representations of those fingerprints — must be stored for future log-on comparison. Unfortunately, log-on credentials are far too often compromised or stolen. If the bad guy steals the digital representation of your fingerprints, how could any system tell the difference between your real fingerprints and their previously accepted digital representations?

In that case, the only solution might be to tell every system in the world that might rely on your fingerprints to not rely on your fingerprints, if that were even possible. The same is true for any other biometric marker. You’ll have a hard time repudiating your real DNA, face, retina scan, and so on if a bad player gets their hands on the digital representation of those biometric markers.
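A toy model shows why the stolen representation is as good as the finger. Here the server stores a hash of the enrolled template, a reasonable sketch of how many systems work; the template bytes are of course invented:

```python
import hashlib

# Toy enrollment flow: the server can only ever compare what it stored at
# enrollment against whatever bytes the client submits at log-on.

def enroll(template):
    return hashlib.sha256(template).hexdigest()  # the stored representation

def verify(stored, submitted):
    return hashlib.sha256(submitted).hexdigest() == stored

record = enroll(b"alice-right-index-minutiae")  # invented template bytes

stolen_copy = b"alice-right-index-minutiae"     # leaked in a breach
assert verify(record, stolen_copy)              # indistinguishable from Alice
```

A compromised password in this position could simply be rotated; the passage above is the reason Alice cannot rotate her fingerprint.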

That doesn’t even take into account systems that only let you log on with, say, your fingerprint. What happens when you can no longer reliably use that fingerprint? What then?

Biometric markers used in conjunction with a secret only you know (password, PIN, and so on) are one way to defeat hackers that have your biometric logon marker. Of course mental secrets can be captured as well, as happens often with nonbiometric two-factor log-on credentials like smartcards and USB key fobs. In those instances, admins can easily issue you a new physical factor and you can pick a new PIN or password. That isn’t the case when one of the factors is your body.

While biometric logons are fast becoming a trendy security feature, there’s a reason they aren’t — and won’t ever be — ubiquitous. Once people realize that biometric logons aren’t what they pretend to be, they will lose popularity and either disappear, always require a second form of authentication, or only be used when high-assurance identification is not needed.

Doomed security technology No. 2: SSL

Secure Sockets Layer was invented by long-gone Netscape in 1995. For two decades it served us adequately. But if you haven’t heard, it is irrevocably broken and can’t be repaired, thanks to the POODLE attack. SSL’s replacement, TLS (Transport Layer Security), is slightly better. Of all the doomed security tech discussed in this article, SSL is the closest to being replaced, as it should no longer be used.

The problem? Hundreds of thousands of websites rely on or allow SSL. If you disable all SSL — a common default in the latest versions of popular browsers — all sorts of websites don’t work. Or they will work, but only because the browser or application accepts “downleveling” to SSL. If it’s not websites and browsers, then it’s the millions of old SSL-speaking servers out there.

OpenSSL is seemingly constantly being hacked these days. While it’s true that about half of OpenSSL’s holes have nothing to do with the SSL protocol itself, SSL vulnerabilities account for the other half. Millions of sites running OpenSSL still allow SSL even though they shouldn’t.

Worse, terminology among tech pros is contributing to the problem, as nearly everyone in the computer security industry calls TLS digital certificates “SSL certs” though they don’t use SSL. It’s like calling a copy machine a Xerox when it’s not that brand. If we’re going to hasten the world off SSL, we need to start calling TLS certs “TLS certs.”

Make a vow today: Don’t use SSL ever, and call Web server certs TLS certs. That’s what they are or should be. The sooner we get rid of the word “SSL,” the sooner it will be relegated to history’s dustbin.
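With a modern Python, for instance, honoring that vow is a one-line configuration choice: build a client context that never offers SSL and refuses anything below TLS 1.2 (the exact version floor is this sketch’s choice, not a universal mandate):

```python
import ssl

# TLS-only client context: PROTOCOL_TLS_CLIENT excludes SSL 2/3 and turns
# on certificate and hostname verification by default; the explicit floor
# below also shuts out the older TLS 1.0/1.1 protocol versions.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```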

Doomed security technology No. 3: Public key encryption

This may surprise some people, but most of the public key encryption we use today — RSA, Diffie-Hellman, and so on — is predicted to be readable as soon as usable quantum computing is figured out. Many, including this author, have long (and so far incorrectly) predicted that usable quantum computing was mere years away. But when researchers finally get it working, most known public key ciphers, including the popular ones, will be readily broken. Spy agencies around the world have been saving encrypted secrets for years waiting for the big breakthrough — or, if you believe some rumors, they have already solved the problem and are reading all our secrets.

Some crypto experts, like Bruce Schneier, have long been dubious about the promise of quantum cryptography. But even the critics can’t dismiss the likelihood that, once quantum computing is figured out, any secret encrypted by RSA, Diffie-Hellman, or even ECC will be immediately readable.

That’s not to say there aren’t quantum-resistant cipher algorithms. There are a few, including lattice-based cryptography and Supersingular Isogeny Key Exchange. But if your public-key cipher isn’t one of those, you’re out of luck if and when quantum computing becomes widespread.

Doomed security technology No. 4: IPsec
When enabled, IPsec allows all network traffic between two or more points to be cryptographically protected for packet integrity and privacy, aka encrypted. Invented in 1993 and made an open standard in 1995, IPsec is widely supported by hundreds of vendors and used on millions of enterprise computers.

Unlike most of the doomed security defenses discussed in this article, IPsec works and works great. But its problems are two-fold.

First, although widely used and deployed, IPsec has never reached the critical mass necessary to keep it in use for much longer. Plus, IPsec is complex and isn’t supported by all vendors. Worse, it can often be defeated by a single device between the source and destination that does not support it — such as a gateway or load balancer. At many companies, the number of computers that get IPsec exceptions is greater than the number of computers forced to use it.

IPsec’s complexity also creates performance issues. When enabled, it can significantly slow down every connection using it, unless you deploy specialized IPsec-enabled hardware on both sides of the tunnel. Thus, high-volume transaction servers such as databases and most Web servers simply can’t afford to employ it. And those two types of servers are precisely where most important data resides. If you can’t protect most data, what good is it?

Plus, despite being a “common” open standard, IPsec implementations don’t typically work between vendors, another factor that has slowed down or prevented widespread adoption of IPsec.

But the death knell for IPsec is the ubiquity of HTTPS. When you have HTTPS enabled, you don’t need IPsec. It’s an either/or decision, and the world has spoken. HTTPS has won. As long as you have a valid TLS digital certificate and a compatible client, it works: no interoperability problems, low complexity. There is some performance impact, but it’s not noticeable to most users. The world is quickly becoming a default world of HTTPS. As that progresses, IPsec dies.

Doomed security technology No. 5: Firewalls

The ubiquity of HTTPS essentially spells the doom of the traditional firewall. I wrote about this in 2012, creating a mini-firestorm that won me invites to speak at conferences all over the world.

Some people would say I was wrong. Three years later, firewalls are still everywhere. True, but most aren’t properly configured, and almost all lack the “least permissive, block-by-default” rules that make a firewall valuable in the first place. Most firewalls I come across have overly permissive rules. I often see “Allow All ANY ANY” rules, which essentially means the firewall is worse than useless: it’s doing nothing but slowing down network connections.
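The difference between the two configurations is easy to sketch. The rule set here is illustrative:

```python
# Block-by-default versus "Allow All ANY ANY": only the first one is
# doing a firewall's actual job of permitting specific, predefined ports.

ALLOWED = {("tcp", 80), ("tcp", 443), ("udp", 53)}  # least-permissive rules

def permit(proto, port, default_deny=True):
    if not default_deny:          # the "ANY ANY" firewall criticized above
        return True
    return (proto, port) in ALLOWED
```

Under block-by-default, an inbound RDP attempt on tcp/3389 is refused; the permissive variant waves the same connection straight through while still paying the latency cost of inspection.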

Any way you define a firewall, it must include some portion that allows only specific, predefined ports in order to be useful. As the world moves to HTTPS-only network connections, all firewalls will eventually have only a few rules — HTTP/HTTPS and maybe DNS. Other protocols, such as DNS, DHCP, and so on, will likely start using HTTPS-only too. In fact, I can’t imagine a future that doesn’t end up HTTPS-only. When that happens, what of the firewall?

The main protection firewalls offer is to secure against a remote attack on a vulnerable service. Remotely vulnerable services, usually exploited by one-touch, remotely exploitable buffer overflows, used to be among the most common attacks. Look at the Robert Morris Internet worm, Code Red, Blaster, and SQL Slammer. But when’s the last time you heard of a global, fast-acting buffer overflow worm? Probably not since the early 2000s, and none of those were as bad as the worms from the 1980s and 1990s. Essentially, if you don’t have an unpatched, vulnerable listening service, then you don’t need a traditional firewall — and right now you don’t. Yep, you heard me right. You don’t need a firewall.

Firewall vendors often write to tell me that their “advanced” firewall has features beyond the traditional firewall that makes theirs worth buying. Well, I’ve been waiting for more than two decades for “advanced firewalls” to save the day. It turns out they don’t. If they perform “deep packet inspection” or signature scanning, it either slows down network traffic too much, is rife with false positives, or scans for only a small subset of attacks. Most “advanced” firewalls scan for a few dozen to a few hundred attacks. These days, more than 390,000 new malware programs are registered every day, not including all the hacker attacks that are indistinguishable from legitimate activity.

Even when firewalls do a perfect job at preventing what they say they prevent, they don’t really work, given that they don’t stop the two biggest malicious attacks most organizations face on a daily basis: unpatched software and social engineering.

Put it this way: Every customer and person I know currently running a firewall is as hacked as someone who doesn’t. I don’t fault firewalls. Perhaps they worked so well back in the day that hackers moved on to other sorts of attacks. For whatever reason, firewalls are nearly useless today and have been trending in that direction for more than a decade.

Doomed security technology No. 6: Antivirus scanners

Depending on whose statistics you believe, malware programs currently number in the tens to hundreds of millions — an overwhelming fact that has rendered antivirus scanners nearly useless.

Not entirely useless, because they stop 80 to 99.9 percent of attacks against the average user. But the average user is exposed to hundreds of malicious programs every year; even with the best odds, the bad guy wins every once in a while. If you keep your PC free from malware for more than a year, you’ve done something special.

That isn’t to say we shouldn’t applaud antivirus vendors. They’ve done a tremendous job against astronomical odds. I can’t think of another sector that has had to keep pace with such overwhelming growth in sheer numbers and such rapid advances in technology since the late 1980s, when there were only a few dozen viruses to detect.

But what will really kill antivirus scanners isn’t this glut of malware. It’s whitelisting. Right now the average computer will run any program you install. That’s why malware is everywhere. But computer and operating system manufacturers are beginning to reset the “run anything” paradigm for the safety of their customers — a movement that is antithetical to antivirus programs, which allow everything to run unimpeded except for programs that contain one of the more than 500 million known antivirus signatures. “Run by default, block by exception” is giving way to “block by default, allow by exception.”
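The paradigm shift is visible in a few lines. This sketch keys execution off a whitelist of known-good file hashes; the approved set and program bytes are stand-ins:

```python
import hashlib

# Block by default, allow by exception: a binary whose hash is not on the
# approved list simply never runs, whether or not any antivirus signature
# for it exists yet.

APPROVED = {hashlib.sha256(b"trusted-app-v1.0").hexdigest()}

def may_run(program_bytes):
    return hashlib.sha256(program_bytes).hexdigest() in APPROVED
```

Contrast this with a scanner, which lets an unknown binary run unless it matches one of 500 million-plus known-bad signatures.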

Of course, computers have long had whitelisting programs, aka application control programs. I reviewed some of the more popular products back in 2009. The problem: Most people don’t use whitelisting, even when it’s built in. The biggest roadblock? The fear of what users will do if they can’t install everything they want willy-nilly or the big management headache of having to approve every program that can be run on a user’s system.

But malware and hackers are getting more pervasive and worse, and vendors are responding by enabling whitelisting by default. Apple’s OS X introduced a near version of default whitelisting three years ago with Gatekeeper. iOS devices have had near-whitelisting for much longer in that they can run only approved applications from the App Store (unless the device is jailbroken). Some malicious programs have slipped by Apple, but the process has been incredibly successful at stopping the huge influx of malware that normally follows popular operating systems and programs.

Microsoft has long had a similar mechanism, through Software Restriction Policies and AppLocker, but an even stronger push is coming in Windows 10 with DeviceGuard. Microsoft’s Windows Store also offers the same protections as Apple’s App Store. While Microsoft won’t be enabling DeviceGuard or Windows Store-only applications by default, the features are there and are easier to use than before.

Once whitelisting becomes the default on most popular operating systems, it’s game over for malware and, subsequently, for antivirus scanners. I can’t say I’ll miss either.

Doomed security technology No. 7: Antispam filters

Spam still makes up more than half of the Internet’s email. You might not notice this anymore, thanks to antispam filters, which have reached levels of accuracy that antivirus vendors can only claim to deliver. Yet spammers keep spitting out billions of unwanted messages each day. In the end, only two things will ever stop them: universal, pervasive, high-assurance authentication and more cohesive international laws.
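Today’s filters reach that accuracy by combining many weak per-token signals into one score. A toy Graham-style combiner in log-odds space; the per-token spam probabilities are invented for illustration, where real filters learn them from large corpora:

```python
import math

# Invented per-token probabilities that a message containing the
# token is spam; real filters estimate these from training data.
SPAMMINESS = {"winner": 0.95, "viagra": 0.99, "meeting": 0.05, "invoice": 0.30}

def spam_probability(tokens):
    # Sum each known token's log-odds, then squash back to [0, 1].
    eta = sum(math.log(SPAMMINESS[t]) - math.log(1 - SPAMMINESS[t])
              for t in tokens if t in SPAMMINESS)
    return 1 / (1 + math.exp(-eta))

print(spam_probability(["winner", "viagra"]) > 0.9)  # True
print(spam_probability(["meeting"]) < 0.1)           # True
```

The log-odds trick is what lets a few strongly spammy tokens dominate a message full of neutral ones, which is one reason these filters outperform simple keyword blocking.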

Spammers still exist mainly because we can’t easily catch them. But as the Internet matures, pervasive anonymity will be replaced by pervasive high-assurance identities. At that point, when someone sends you a message claiming to have a bag of money to mail you, you will be assured they are who they say they are.

High-assurance identities can only be established when all users are required to adopt two-factor (or higher) authentication to verify their identity, followed by identity-assured computers and networks. Every cog in between the sender and the receiver will have a higher level of reliability. Part of that reliability will be provided by pervasive HTTPS (discussed above), but it will ultimately require additional mechanisms at every stage of authentication to assure that when I say I’m someone, I really am that someone.
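The two-factor authentication described here is commonly built on time-based one-time passwords (TOTP, RFC 6238). A minimal sketch using only the Python standard library; the demonstration value is the RFC’s published SHA-1 test vector, not a production secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because the code is derived from a shared secret plus the current time, possession of the device (the second factor) is demonstrated without the secret ever crossing the wire.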

Today, almost anyone can claim to be anyone else, and there’s no universal way to verify that person’s claim. This will change. Almost every other critical infrastructure we rely on — transportation, power, and so on — requires this assurance. The Internet may be the Wild West right now, but the increasingly essential nature of the Internet as infrastructure virtually ensures that it will eventually move in the direction of identity assurance.

Meanwhile, the international border problem that permeates nearly every online-criminal prosecution is likely to be resolved in the near future. Right now, many major countries do not accept evidence or warrants issued by other countries, which makes arresting spammers (and other malicious actors) nearly impossible. You can collect all the evidence you like, but if the attacker’s home country won’t enforce the warrant, your case is toast.

As the Internet matures, however, countries that don’t help ferret out the Internet’s biggest criminals will be penalized. They may be placed on a blacklist. In fact, some already are. For example, many companies and websites reject all traffic originating from China, whether it’s legitimate or not. Once we can identify criminals and their home countries beyond dispute, as outlined above, those home countries will be forced to respond or suffer penalties.

The heyday of spammers, when most of their junk reached your inbox, is already over. Pervasive identities and international law changes will close the coffin lid on spam — and the security tech necessary to combat it.

Doomed security technology No. 8: Anti-DoS protections

Thankfully, the same pervasive identity protections mentioned above will be the death knell for denial-of-service (DoS) attacks and the technologies that have arisen to quell them.

These days, anyone can launch free Internet tools to overwhelm websites with billions of packets. Most operating systems have built-in anti-DoS attack protections, and more than a dozen vendors can protect your websites even when being hit by extraordinary amounts of bogus traffic. But the loss of pervasive anonymity will stop all malicious senders of DoS traffic. Once we can identify them, we can arrest them.

Think of it this way: Back in the 1920s there were a lot of rich and famous bank robbers. Banks finally beefed up their protection, and cops got better at identifying and arresting them. Robbers still hit banks, but they rarely get rich, and they almost always get caught, especially when they persist in robbing more banks. The same will happen to DoS senders: The sooner we can quickly identify them, the sooner they will disappear as the bothersome elements of society that they are.
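Until that day arrives, most anti-DoS protections come down to some form of rate limiting. A token-bucket sketch with an injectable clock so it can be tested deterministically; the rate and burst capacity below are illustrative, not recommendations:

```python
import time

class TokenBucket:
    """Admit requests at a sustained rate with a bounded burst;
    drop everything beyond that -- a basic anti-DoS building block."""

    def __init__(self, rate: float, capacity: float, now=None):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2.0, now=0.0)
print(bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0))  # True True False
```

A burst of two requests is admitted immediately; the third is dropped until the bucket refills, which is exactly the behavior that blunts a flood of bogus traffic.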

Doomed security technology No. 9: Huge event logs

Computer security event monitoring and alerting is difficult. Every computer is easily capable of generating tens of thousands of events on its own each day. Collect them into a centralized logging database and pretty soon you’re talking petabytes of needed storage. Today’s event log management systems are often lauded for the vast size of their disk storage arrays.

The only problem: This sort of event logging doesn’t work. When nearly every collected event is worthless and goes unread, and the cumulative effect of all those worthless unread events is a huge storage cost, something has to give. Soon enough admins will require application and operating system vendors to give them more signal and less noise, by passing along useful events without the mundane log clutter. In other words, event log vendors will soon be bragging about how little space they take rather than how much.

Doomed security technology No. 10: Anonymity tools (not to mention anonymity and privacy)

Lastly, any mistaken vestige of anonymity and privacy will be completely wiped away. We already really don’t have it. The best book I can recommend on the subject is Bruce Schneier’s “Data and Goliath.” A quick read will scare you to death if you didn’t already realize how little privacy and anonymity you truly have.

Even hackers who think that hiding on Tor and other “darknets” give them some semblance of anonymity must understand how quickly the cops are arresting people doing bad things on those networks. Anonymous kingpin after anonymous kingpin ends up being arrested, identified in court, and serving real jail sentences with real jail numbers attached to their real identity.

The truth is, anonymity tools don’t work. Many companies, and certainly law enforcement, already know who you are. The only difference is that, in the future, everyone will know the score and stop pretending they are staying hidden and anonymous online.

I would love for a consumer’s bill of rights guaranteeing privacy to be created and passed, but past experience teaches me that too many citizens are more than willing to give up their right to privacy in return for supposed protection. How do I know? Because it’s already the standard everywhere but the Internet. You can bet the Internet is next.


MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

Is the cloud the right spot for your big data?

Is the cloud a good spot for big data?

That’s a controversial question, and the answer changes depending on who you ask.

Last week I attended the HP Big Data Conference in Boston, and both an HP customer and an HP executive told me that big data isn’t a good fit for the public cloud.

CB Bohn is a senior database engineer at Etsy, and a user of HP’s Vertica database. The online marketplace uses the public cloud for some workloads, but its primary functions are run out of a co-location center, Bohn said. It doesn’t make sense for the company to lift and shift its Postgres, Vertica SQL and Hadoop workloads into the public cloud, he said. It would be a massive undertaking for the company to port all the data associated with those programs into the cloud. Then, once it’s transferred to the cloud, the company would have to pay ongoing costs to store it there. Meanwhile, the company already has a co-location facility set up and in-house expertise to manage the infrastructure required to run those programs. The cloud just isn’t a good fit for Etsy’s big data, Bohn said.

Chris Selland, VP of Business Development at HP’s Big Data software division, says most of the company’s customers aren’t using the cloud in a substantial way with big data. Perhaps that’s because HP’s big data cloud, named Helion, isn’t quite as mature as, say, Amazon Web Services or Microsoft Azure. But still, Selland said there are both technical challenges (like data portability and data latency) and non-technical reasons, such as company executives being more comfortable with the data not being in the cloud.

Bohn isn’t totally against the cloud, though. For quick, large processing jobs the cloud is great. "Spiky" workloads that need fast access to large amounts of compute resources are ideal for the cloud. But if an organization has a constant need for compute and storage resources, it can be more efficient to buy commodity hardware and run it yourself.

Public cloud vendors like Amazon Web Services make the opposite argument. I asked Amazon CTO Werner Vogels about private clouds recently, and he argued that businesses should not waste time on building out data center infrastructure when AWS can supply it to them. Bohn counters that it’s cheaper to buy the equipment than to rent it over the long term.
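Bohn’s buy-versus-rent argument is, at bottom, a break-even calculation: owning wins once the monthly savings over rent have paid back the purchase price. A sketch with made-up dollar figures for illustration, not quotes from any vendor:

```python
# Months after which owning hardware becomes cheaper than renting
# equivalent capacity. All dollar figures here are hypothetical.
def months_to_break_even(purchase_cost, monthly_ops_cost, monthly_rent):
    saving = monthly_rent - monthly_ops_cost  # what owning saves per month
    if saving <= 0:
        return None  # renting never loses under these assumptions
    return purchase_cost / saving

# e.g. $24,000 of hardware, $500/month to run it, vs. $1,500/month rent
print(months_to_break_even(24000, 500, 1500))  # 24.0
```

Under these invented numbers, owning pays for itself in two years of steady use — which is precisely why spiky, short-lived workloads favor the cloud and constant ones favor commodity hardware.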

Even as the public cloud has matured, there’s clearly still a debate about which workloads it suits and which it doesn’t.

The real answer to this question is that it depends on the business. For startup companies that were born in the cloud and have all their data there, it will make sense to do their data processing in the cloud. For companies with big data center footprints or co-location infrastructure already set up, there may not be a reason to lift and shift to the cloud. Each business will have its own specific use cases, some of which may be good for the cloud, and others which may not be.
