Archive for the ‘Tech’ Category

11 cutting-edge databases worth exploring now

From document-graph hybrids to ‘unkillable’ clusters, the next generation of databases offers intrigue and innovation

Only a few years ago, database work was among the most boring of tasks in IT — in a good way. Data went into one of the major SQL databases and it came out later, all in one piece, exactly as it went in. The database creators had succeeded in delivering rock-solid performance, and everyone started taking it for granted.

Then the nature of what we wanted to store changed. Databases had to move beyond bank accounts and airline tickets because everyone had begun sharing data on social networks. Suddenly there was much more data to store, and most of this new data didn’t fit into the old tables. The work of database admins and creators transformed, and what has emerged is a wide array of inventive solutions that have made databases among the more intriguing technologies in IT today.

Cassandra, MongoDB, CouchDB, Riak, Neo4j — the innovations of the past several years are by now well-established at many organizations. But a new generation is fast rising. Here we provide an overview of 11 cutting-edge databases tuned to store more data in more flexible formats on more machines, and to answer queries in more varied ways.

The database world has never been as varied and interesting as it is right now.

When a few refugees from Twitter wanted to build something new with the experience they gained processing billions of tweets, they decided that a distributed database was the right challenge. Enter FaunaDB. In goes the JSON, and out come answers from a distributed collection of nodes. FaunaDB’s query language offers the ability to ask complex questions that join together data from different nodes while searching through social networks and other graph structures in your databases.

If you’re simply interested in experimenting or you don’t want the hassle of rolling your own, FaunaDB comes in a cloud database-as-a-service version. When and if you want to take more control, you can install the enterprise version on your own iron.

You wouldn’t be the first architect to throw up your hands and say, “If only we could mix the flexibility of document-style databases with the special power of graph databases and still get the flexibility of tabular data. Then we would have it made.”

Believe it or not, a database aimed at satisfying those needs is already here. ArangoDB lets you stick data in documents or in a graph database. Then you can write queries, essentially loops with joins, that run inside the database with all of the data locality that makes processing them faster. Oh, and the query language is wrapped up in JavaScript that exposes microservices through a RESTful API. It’s a kitchen-sink approach that’s bound to make many people happy.
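To make that concrete, here is a minimal sketch of a mixed document-and-graph query sent to ArangoDB’s HTTP cursor API from Python with the requests library. It assumes a recent ArangoDB instance on localhost with default development credentials, plus illustrative “users” (document) and “knows” (edge) collections; none of those specifics come from the article.

```python
import requests

# Illustrative AQL: filter documents in a "users" collection, then walk the
# "knows" edge collection one or two hops out from each match.
AQL = """
FOR user IN users
  FILTER user.city == @city
  FOR friend IN 1..2 OUTBOUND user knows
    RETURN { user: user.name, friend: friend.name }
"""

resp = requests.post(
    "http://localhost:8529/_api/cursor",          # assumed local instance, default database
    json={"query": AQL, "bindVars": {"city": "Berlin"}},
    auth=("root", ""),                            # default dev credentials; change in practice
)
resp.raise_for_status()
for row in resp.json()["result"]:
    print(row)
```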

ArangoDB isn’t the only tool in town trying to mix the power of graph and document databases. OrientDB does something similar, but packages itself as a “second-generation graph database.” In other words, the nodes in the graphs are documents waiting for arbitrary key-value pairs.

This makes OrientDB feel like a graph database first, but there’s no reason you can’t use the key-value store alone. It also includes a RESTful API waiting for your queries.

How many times have you found yourself wishing for the power of a search engine like Lucene but with the structure and querying ease of SQL? If the answer is more than zero, there’s a database aimed squarely at you.

While Lucene began as a search engine for finding keywords in large, unstructured blocks of text, it has always been able to store keys and matching values in each document, which leads some to consider it part of the NoSQL revolution. This database started with Lucene and its larger, scalable, distributed cousin Elasticsearch, then added a query language with SQL syntax. Its developers are also working on adding JOINs, which will make it even more powerful — assuming you need to use JOINs.

People who love the old-fashioned SQL way of thinking will enjoy the fact that it bundles newer, scalable technology in a manner that’s easier for SQL-based systems to use.

The name might not be appealing, but the sentiment is. CockroachDB’s developers embraced the idea that no organism is as long-lasting or as resilient as the cockroach, bragging, “CockroachDB allows you to deploy applications with the strongest disaster recovery story in the industry.”

While time will tell whether they’ve truly achieved that goal, it won’t be for lack of engineering. The team’s plan is to make CockroachDB simple to scale. If you add a new node, CockroachDB will rebalance itself to use the new space. If you kill a node, it will shrink and replicate the data from the backup sources. To add extra security, CockroachDB promises fully serializable transactions that span the entire cluster. You don’t need to worry about the data, which incidentally is stored as a “single, monolithic map from key to value where both keys and values are byte strings (not unicode).”
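As a rough illustration of those cluster-wide serializable transactions, here is a minimal sketch that talks to CockroachDB through its PostgreSQL-compatible SQL layer using psycopg2. The local insecure node, the “defaultdb” database, and the “accounts” table are assumptions for the example, and the SQL details may differ between CockroachDB versions.

```python
import psycopg2

# Assumed local, insecure, single-node CockroachDB on its default SQL port; the
# "accounts" table is illustrative and "defaultdb" ships with recent builds.
conn = psycopg2.connect(host="localhost", port=26257, user="root", dbname="defaultdb")

conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS accounts (id INT PRIMARY KEY, balance INT)")

conn.autocommit = False
with conn:  # commits on success, rolls back on error
    with conn.cursor() as cur:
        # Both writes commit together; the claim is that the transaction is
        # serializable across every node holding a replica of these rows.
        cur.execute("UPSERT INTO accounts (id, balance) VALUES (1, 100)")
        cur.execute("UPSERT INTO accounts (id, balance) VALUES (2, 250)")
conn.close()
```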

In a traditional database, you send a query and the database sends an answer. If you don’t send a query, the database doesn’t send you anything. It’s simple and perfect for some apps, but not for others.

RethinkDB inverts the old model and pushes data to clients. If the query answer changes, RethinkDB sends the new data to the client. It’s ideal for some of the new interactive apps that are coming along that help multiple people edit documents or work on presentations at the same time. Changes from one user are saved to RethinkDB, which promptly sends them off to the other users. The data is stored in JSON documents, which is ideal for Web apps.
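A minimal sketch of that push model, using the classic module-level API of RethinkDB’s official Python driver (newer driver releases wrap the same calls in a RethinkDB class); the “collab” database and “presentations” table are illustrative.

```python
import rethinkdb as r

# Assumed local RethinkDB instance; database and table names are illustrative.
conn = r.connect(host="localhost", port=28015, db="collab")

# changes() opens a changefeed: instead of polling, this client blocks here and
# RethinkDB pushes each insert, update, or delete as soon as another user saves one.
for change in r.table("presentations").changes().run(conn):
    print("old:", change["old_val"])
    print("new:", change["new_val"])
```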

Some databases want to store all of the information in the world. InfluxDB merely wants the time-series data, the numbers that come in an endless stream. They might be log files from a website or sensor readings from an experiment, but they keep coming and want to be analyzed.

InfluxDB offers a basic HTTP(S) API for adding data. For querying, it has an SQL-like syntax that includes some basic statistical operators like MEAN. Thus, you can ask for the average of a particular value over time and it will compute the answer inside the database without sending all of the data back to you. This makes building time-series websites easy and efficient.
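A minimal sketch of both halves of that workflow, assuming an InfluxDB 0.9-style instance on its default port; the “sensors” database and “temperature” measurement are made up for illustration.

```python
import requests

BASE = "http://localhost:8086"   # assumed local InfluxDB (0.9-style HTTP API)
DB = "sensors"                   # illustrative database name

# Write one point in the line protocol: measurement, tag, field value.
requests.post(f"{BASE}/write", params={"db": DB},
              data="temperature,room=lab value=21.5")

# Ask the database itself for the aggregate; only the MEAN comes back over the wire.
resp = requests.get(f"{BASE}/query", params={
    "db": DB,
    "q": "SELECT MEAN(value) FROM temperature WHERE time > now() - 1h",
})
print(resp.json())
```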


Clustrix may not be a new product anymore — it’s up to Version 6.0 — but it still calls itself part of the NewSQL revolution because it offers automatic replication and clustering with much of the speed of an in-memory database. The folks behind Clustrix have added plenty of management tools to ensure the cluster can manage itself without too much attention from a database administrator.

Perhaps it makes more sense to see the version number as a sign of maturity and experience. You get all of the fun of new ideas with the assurance that comes only from years of testing.


If you have data to spread around the world in a distributed network of databases, NuoDB is ready to store it for you with all the concurrency control and transaction durability you need. The core is a “durable distributed cache” that absorbs your queries and eventually pushes the data into a persistent disk. All interactions with the cache can be done with ACID transaction semantics — if you desire. The commit protocol can be adjusted to trade off speed for durability.

The software package includes a wide variety of management tools for tracking the nodes in the system. All queries use an SQL-like syntax.


Some databases store information. VoltDB is designed to analyze it at the same time, offering “streaming analytics” that “deliver decisions in milliseconds.” The data arrives in JSON or SQL, then is stored and analyzed in the same process, which incidentally is integrated with Hadoop to simplify elaborate computation. Oh, and it also offers ACID transactional guarantees for the storage.


RAM has never been cheaper — or faster — and MemSQL is ready to make it easy to keep all of your data in RAM so that queries can be answered faster than ever. It’s like a smart cache, but one that can also replicate itself across a cluster. Once the data is in RAM, it’s also easy to analyze with built-in analytics.

The latest version also supports geospatial data for both storage and analysis. It’s easy to create geo-aware mobile apps that produce analytical results as the apps move around the world.


As containers take off, so do security concerns

Containers offer a quick and easy way to package up applications but security is becoming a real concern

Containers offer a quick and easy way to package up applications and all their dependencies, and are popular with testing and development.

According to a recent survey sponsored by container data management company Cluster HQ, 73 percent of enterprises are currently using containers for development and testing, but only 39 percent are using them in a production environment.

But this is changing: 65 percent said that they plan to use containers in production in the next 12 months, and respondents cited security as their biggest worry. According to the survey, just over 60 percent said that security was either a major or a moderate barrier to adoption.

Containers can be run within virtual machines or on traditional servers. The idea is somewhat similar to that of a virtual machine itself, except that while a virtual machine includes a full copy of the operating system, a container does not, making them faster and easier to load up.

The downside is that containers are less isolated from one another than virtual machines are. In addition, because containers are an easy way to package and distribute applications, many are doing just that — but not all the containers available on the web can be trusted, and not all libraries and components included in those containers are patched and up-to-date.

According to a recent Red Hat survey, 67 percent of organizations plan to begin using containers in production environments over the next two years, but 60 percent said that they were concerned about security issues.
Isolated, but not isolated enough

Although containers are not as completely isolated from one another as virtual machines, they are more secure than just running applications by themselves.

“Your application is really more secure when it’s running inside a Docker container,” said Nathan McCauley, director of security at Docker, which currently dominates the container market.

According to the Cluster HQ survey, 92 percent of organizations are using or considering Docker containers, followed by LXC at 32 percent and Rocket at 21 percent.

Since the technology was first launched, McCauley said, Docker containers have had built-in security features such as the ability to limit what an application can do inside a container. For example, companies can set up read-only containers.
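As a rough illustration of the read-only idea, the sketch below shells out to the standard docker CLI from Python and launches a container whose root filesystem is mounted read-only; the alpine image and the probe command are placeholders, not anything prescribed by Docker.

```python
import subprocess

# Launch a throwaway container whose root filesystem is mounted read-only, so a
# compromised process inside it cannot modify the image contents. The alpine
# image and the probe command are placeholders.
subprocess.run(
    ["docker", "run", "--rm", "--read-only", "alpine",
     "sh", "-c", "touch /tmp/probe || echo 'filesystem is read-only'"],
    check=True,
)
```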

Containers also use namespaces by default, he said, which prevent applications from being able to see other containers on the same machine.

“You can’t attack something else because you don’t even know it exists,” he said. “You can’t even get a handle on another process on the machine, because you don’t even know it’s there.”

However, container isolation doesn’t go far enough, said Simon Crosby, co-founder and CTO at security vendor Bromium.

“Containers do not make a promise of providing resilient, multi-tenant isolation,” he said. “It is possible for malicious code to escape from a container to attack the operating system or the other containers on the machine.”

If a company isn’t looking to get maximum efficiency out of its containers, however, it can run just one container per virtual machine.

This is the case with Nashua, NH-based Pneuron, which uses containers to distribute its business application building blocks to customers.

“We wanted to have assigned resourcing in a virtual machine to be usable by a specific container, rather than having two containers fight for a shared set of resources,” said Tom Fountain, the company’s CTO. “We think it’s simpler at the administrative level.”

Plus, this gives the application a second layer of security, he said.

“The ability to configure a particular virtual machine will provide a layer of insulation and security,” he said. “Then when we’re deployed inside that virtual machine then there’s one layer of security that’s put around the container, and then within our own container we have additional layers of security as well.”

But the typical use case is multiple containers inside a single machine, according to a survey of IT professionals released Wednesday by container security vendor Twistlock.

Only 15 percent of organizations run one container per virtual machine. The majority of the respondents, 62 percent, said that their companies run multiple containers on a single virtual machine, and 28 percent run containers on bare metal.

And the isolation issue is still not figured out, said Josh Bressers, security product manager at Red Hat.

“Every container is sharing the same kernel,” he said. “So if someone can leverage a security flaw to get inside the kernel, they can get into all the other containers running that kernel. But I’m confident we will solve it at some point.”

Bressers recommended that when companies think about container security, they apply the same principles as they would apply to a naked, non-containerized application — not the principles they would apply to a virtual machine.

“Some people think that containers are more secure than they are,” he said.
Vulnerable images

McCauley said that Docker is also working to address another security issue related to containers — that of untrusted content.

According to BanyanOps, a container technology company currently in private beta, more than 30 percent of containers distributed in the official repositories have high priority security vulnerabilities such as Shellshock and Heartbleed.

Outside the official repositories, that number jumps to about 40 percent.

Of the images created this year and distributed in the official repositories, 74 percent had high or medium priority vulnerabilities.

“In other words, three out of every four images created this year have vulnerabilities that are relatively easy to exploit with a potentially high impact,” wrote founder Yoshio Turner in the report.

In August, Docker announced the release of Docker Content Trust, a new feature in the container engine that makes it possible to verify the publisher of Docker images.

“It provides cryptographic guarantees and really leapfrogs all other secure software distribution mechanisms,” Docker’s McCauley said. “It provides a solid basis for the content you pull down, so that you know that it came from the folks you expect it to come from.”

Red Hat, for example, which has its own container repository, signs its containers, said Red Hat’s Bressers.

“We say, this container came from Red Hat, we know what’s in it, and it’s been updated appropriately,” he said. “People think they can just download random containers off the Internet and run them. That’s not smart. If you’re running untrusted containers, you can get yourself in trouble. And even if it’s a trusted container, make sure you have security updates installed.”

According to Docker’s McCauley, existing security tools should be able to work on containers the same way they do on regular applications, and he also recommended that companies adopt Linux security best practices.

Earlier this year Docker, in partnership with the Center for Internet Security, published a detailed security benchmark best practices document, along with a tool called Docker Bench that checks host machines against these recommendations and generates a status report.

However, for production deployment, organizations need tools that they can use that are similar to the management and security tools that already exist for virtualization, said Eric Chiu, president and co-founder at virtualization security vendor HyTrust.

“Role-based access controls, audit-quality logging and monitoring, encryption of data, hardening of the containers — all these are going to be required,” he said.

In addition, container technology makes it difficult to see what’s going on, experts say, and legacy systems can’t cut it.

“Lack of visibility into containers can mean that it is harder to observe and manage what is happening inside of them,” said Loris Degioanni, CEO at Sysdig, one of the new vendors offering container management tools.

Another new vendor in this space is Twistlock, which came out of stealth mode in May.

“Once your developers start to run containers, IT and IT security suddenly becomes blind to a lot of things that happen,” said Chenxi Wang, the company’s chief strategy officer.

Say, for example, you want to run anti-virus software. According to Wang, it won’t run inside the container itself, and if it’s running outside the container, on the virtual machine, it can’t see into the container.

Twistlock provides tools that can add security at multiple points. It can scan a company’s repository of containers, scan containers as they are loaded, and prevent vulnerable containers from launching.

“For example, if the application inside the container is allowed to run as root, we can say that it’s a violation of policy and stop it from running,” she said.

Twistlock can monitor whether a container is communicating with known command-and-control hosts and either report it, cut off the communication channel, or shut down the container altogether.

And the company also monitors communications between the container and the underlying Docker infrastructure, to detect applications that are trying to issue privileged commands or otherwise tunnel out of the container.

Market outlook

According to IDC analyst Gary Chen, container technology is still so new that most companies are still figuring out what value containers offer and how they’re going to use them.

“Today, it’s not really a big market,” he said. “It’s still really early in the game. Security is something you need once you start to put containers into operations.”

That will change once containers get more widely deployed.

“I wouldn’t be surprised if the big guys eventually got into this marketplace,” he said.

More than 800 million containers have been downloaded so far by tens of thousands of enterprises, according to Docker.

But it’s hard to calculate the dollar value of this market, said Joerg Fritsch, research director for security and risk management at research firm Gartner.

“Docker has not yet found a way to monetize their software,” he said, and there are very few other vendors offering services in this space. He estimates the market size to be around $200 million or $300 million, much of it from just a single services vendor, Odin, formerly the service provider part of virtualization company Parallels.

With the exception of Odin, most of the vendors in this space, including Docker itself, are relatively new startups, he said, and there are few commercial management and security tools available for enterprise customers.

“When you buy from startups you always have this business risk, that a startup will change its identity on the way,” Fritsch said.


How to get security right when embracing rapid software development

Five steps to reduce risk while moving to continuous updates

Accelerated software development brings with it particular advantages and disadvantages. On one hand, it increases the speed to market and allows for fast, frequent code releases, which trump slow, carefully planned ones that unleash a torrent of features at once. Continuous release cycles also allow teams to fine-tune software. With continuous updates, customers don’t have to wait for big releases that could take weeks or months.

Embracing failure without blame is also a key tenet of rapid acceleration. Teams grow faster this way, and management should embrace this culture change. Those who contribute to accidents can give detailed accounts of what happened without fear of repercussion, providing valuable learning opportunities for all involved.

However, when things are moving as quickly as rapid acceleration allows, outages, security vulnerabilities and bugs become bigger concerns. Mistakes can occur, potentially leading to security problems. The upside: Automation of tasks can actually reduce mistakes and thus remove potential security issues.

When development is rushed without security awareness, wrong software, unencrypted apps, or insecure apps could be installed; audits and compliances could fail; intellectual property or private customer data may be leaked. Security is essential to the success of any development project — make it a priority.

How to Accelerate Safely
Minimize security concerns associated with rapid acceleration by talking to all stakeholders involved. Everyone needs to be brought into the discussion. Members of the development team, along with operations and security, should analyze the existing system and vocalize their visions for the new one prior to closing gaps with tools, automation and testing.

To implement a rapid approach to software development while reducing the potential risks, consider these five steps:

* Automate everything. Your team must take time to identify bottlenecks (the delivery process, infrastructure, testing, etc.) and find methods to automate anything that doesn’t need to be completed manually.

Consider establishing a system for continuous deployment. This allows automatic deployment of every software update to production and delivery. Continuous integration should also be a priority so changes and code added to the pipeline are automatically isolated, tested, and reported on before automation tools integrate code into the code base. Automation not only reduces waste in the process, but it also produces a repeatable process and outcome, which are squarely in the wheelhouse of security’s desires. (A minimal sketch of such a test-and-deploy gate appears after this list.)

* Be agile but not unrealistic. Instead of spending an exorbitant amount of time on planning, flesh out the requirements and begin the process. Start by designating people to stay ahead of development, keep the project on track, and ensure deliverables are completed on schedule. Through it all, keep operations — and your company — transparent.

If someone runs in with a high-priority request, the project manager or product owner can say, “No, we can’t finish that in this sprint, but we can add it to the backlog with a high-priority mark and work it into an upcoming sprint.” Agile programming is a pull model, not a push model. Management needs to understand how this works and support it.

If the sprint’s allocated stories are completed early, more work can then be pulled in. That said, don’t let others push unplanned work on the team. Agile programming requires team agreement to complete a specific amount of work in a specific time frame.

* Work across departments. When departments move together rapidly, tensions will inevitably rise. Security should be brought into the fold so these issues don’t cause speed bumps. Sales teams, marketing teams, or teams invested in the end product need to have an equal seat at the table. Planning should be a collaborative effort among all stakeholders.

* Separate duties and systems. Often, as companies attempt to embrace rapid acceleration, a need for separation of duties may arise as just one of many compliance requirements. Only select employees should have access to production and test systems.

* Work as a team. Ensure everyone understands the company’s compliance and controls requirements. Be creative to ensure requirements are met without creating speed bumps. Also, consider how controls could be automated. Finally, check with your auditor to make sure what you’ve implemented meets the requirements.
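As promised in the first step, here is a minimal sketch of an automated test-and-deploy gate in Python; pytest and ./deploy.sh are placeholders for whatever commands your own pipeline actually uses.

```python
import subprocess
import sys

# Run the test suite; only a passing build is allowed to move on to deployment.
# "pytest" and "./deploy.sh" are placeholders for your pipeline's own commands.
tests = subprocess.run(["pytest", "-q"])
if tests.returncode != 0:
    sys.exit("tests failed; the change is reported and never reaches the code base")

subprocess.run(["./deploy.sh", "--env", "staging"], check=True)
```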

Security will always be a concern with development, and that concern only intensifies when processes speed up. As long as your teams work together, communicate clearly, know their places and expectations, and hold one another accountable, you can hasten the development process while keeping security fears at bay.


5 companies that impress with employee benefits

A healthy employee is a happy employee, and these five companies figured that out. These powerhouses offer employees impressive health and wellness benefits to keep stress down and productivity up.

How some companies strive to keep employees happy and healthy
Your office chair is killing you. Well, OK, sitting for eight hours a day at your desk job might not be killing you, but at the very least, it’s not good for your health. On top of that, we’re learning that our brains haven’t caught up with the stress of our culture’s modern “always-on” lifestyles; they still respond as if we were facing the caveman threats of our past. Is it an email from your boss stressing you out or are you being chased by a lion? Your brain really can’t tell the difference, meaning many of us live in a constant state of fight or flight. And if you have a bad boss, you could even be at higher risk for heart disease, not to mention depression, sleep problems, anxiety and a number of other health issues.

That’s probably why companies are taking corporate wellness and benefits seriously, as more health concerns pop up over sedentary work and stressful environments. Here are five companies with corporate wellness programs and benefits aimed at keeping employees happy, healthy and most of all, productive.

Well-known as a progressive Internet company, Google has an impressive corporate wellness program. To start, the café supplies free breakfast, lunch and dinner for employees, with options ranging from sushi to fresh pressed juice. The Mountain View, Calif., office also has its own on-site physicians and nurses, so if you feel a cold coming on, you can get treated on site. Google also encourages its employees to continue learning by offering a reimbursement program for classes and degree programs. And employees seeking legal counsel can also get advice at no cost and even get legal services at a discount.

There are also shuttle buses, complete with Wi-Fi, to take employees to and from work, as well as an electric-car share program, plug-in stations for electric vehicles and gBikes to get around campus. There’s more, too: Google has on-site exercise facilities, extra time off for new parents, a rumored bowling alley as well as roof decks and unique office layouts.

Zappos’ decision to do away with bosses and adopt holacracy is a testament to the company’s dedication to staying unique in the corporate world. And that extends to the vast array of benefits the company offers its employees. Starting with medical, employees get a free employee premium, free primary care and free generic prescriptions. Employees can take advantage of 24-hour telemedicine service, wellness coaches, infertility benefits, on-site health screenings and more.

Zappos’ Las Vegas office features an on-site fitness center with both in-person and virtual exercise classes. Employees can get nutritional advice, take weight management classes, get smoking cessation help, learn to reduce stress, take part in “wellness competitions,” get massages and much more right on campus. There is even a nap room with a “nap pod,” for employees that need to catch a few Z’s before getting back to work. Employees already dedicated to their fitness goals can even receive rewards and recognition from the company for their efforts.

In addition to full benefits like flexible work and time off, comprehensive benefits and travel benefits, just to name a few, employees at Cisco can get acupuncture, physical therapy and primary care right on-site. The company has its own on-site fitness center as well, where employees can get a workout in during the day. Cisco’s campus also has an outdoor sports club, organized sports leagues and hiking and biking trails for employees to use.

Its café focuses on providing fresh, seasonal and healthy food for workers, and there are also gourmet food trucks where employees can get their lunch. Teams also receive “fun-funds,” so they can celebrate and take part in team-building exercises outside of the office. For employees who want to give back, Cisco will donate $10 for every hour of volunteer work, up to $1,000, and will also match any cash donation, up to $1,000, to a nonprofit organization.

While Yahoo CEO Marissa Mayer might have cut back on working from home, a highly sought-after perk, the company has a number of wellness benefits for employees. Employees can take fitness classes on-site, including yoga, cardio-kickboxing, Pilates and even golf lessons. The cafeteria is open 24 hours a day, 7 days a week for those long work days, and employees receive monthly food coupons to help subsidize the cost.

Both men and women get up to eight weeks of leave for the birth of a baby, adoption or foster child placement and new moms can take up to 16 weeks. Employees also get $500 a month for incidentals like groceries, laundry and even going out to eat. And anytime an employee gets to a five-year milestone, they can take up to eight weeks of unpaid leave.

One look at Apple’s page on Glassdoor, and it’s clear people like working for the company. With a whopping 5,500 reviews, the company maintains a 4.5-star rating, out of a possible 5 stars. Benefits kick in immediately for employees, and even part-time workers in the Apple store get full benefits.

Some companies might keep employees stocked with soda and bagels, but Apple instead supplies its workers with, well, Apples. And every few weeks the company throws a “beer bash,” where employees can get together on the campus to mingle, listen to live music and drink free beer. Apple also helps with the strain of commuting to Cupertino by offering shuttles and stipends for those traveling by bus or train.


Are mainframes the answer to IT’s energy efficiency concerns?

Anyone who manages the technology demands of a large business faces plenty of exciting moments on the job, but I think it’s safe to say that calculating the energy costs of your company’s IT systems isn’t among those moments.

I was reminded of just how hard it is to factor energy efficiency into purchase and configuration decisions while reading some recent claims in the media around the cloud, and I remembered some simple but often overlooked ways mainframes solve tough energy efficiency dilemmas.

The Power of One
A device that can handle more data with fewer resources sounds like the definition of efficiency to me. No matter how much power it may have, a cluster of servers is still comprised of multiple devices, and every device involved in a clustered system multiplies issues of space, heat production, and power requirements. With up to 141 configurable processor units and 10TB of memory in a single machine, current mainframes offer comparable power to a large cluster of x86-based servers while saving floor space and energy output. That’s important for organizations that are looking to reduce their carbon or physical footprint or meet energy efficiency thresholds or capacity limits.

Limits of Capacity
One of the most energy-efficient aspects of mainframes is rooted in the system’s design. From their inception, mainframes have had some of the highest resource utilization rates of any hardware, often exceeding 95%. Many other systems are designed to run at 70% capacity or less in order to allow for system-originated maintenance, clean up, and checkpoints. If a hefty percentage of a system’s capacity is always busy processing self-generated tasks, then those throughput figures don’t really contribute to efficiency, do they?
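A back-of-the-envelope illustration of why that matters, using made-up numbers rather than anything measured:

```python
# Made-up numbers, purely to illustrate the utilization argument: how much raw
# capacity must be provisioned to get 100 units of useful work done?
useful_work = 100.0
mainframe_utilization = 0.95   # sustained utilization claimed for mainframes
cluster_utilization = 0.70     # typical design target for distributed servers

print("mainframe capacity needed:", round(useful_work / mainframe_utilization, 1))  # ~105.3
print("cluster capacity needed:  ", round(useful_work / cluster_utilization, 1))    # ~142.9
# The lower-utilization design needs roughly a third more hardware, and therefore
# more floor space, power, and cooling, to deliver the same useful output.
```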

When Less Is More
Think about a car engine. Not every cylinder is firing every time you press on the accelerator. If this were the case, the concept of fuel efficiency would be non-existent (and gas would likely be even more expensive). Some engines even use a concept called variable displacement, which can dynamically shut off a cylinder or two to optimize fuel consumption. Now, what type of computing device is most similar to a variable displacement engine? That would be the mainframe. The processing demands on any computer shift moment by moment, and mainframes are designed to easily shut down some processors when load is not present.

Computing the Cost
Too often, business environments demand short-term successes, which result in short-term decision-making. A classic example is considering the cost of acquisition rather than the cost of ownership in hardware and software. While one system may cost significantly less to buy and configure, there are significant costs that can pile up over six months – including electrical usage and heating/cooling. Figures from manufacturers promise significant savings over the lifetime of ownership. I’ve even heard stories where, due to power capacity limitations, like inside of the Washington D.C. beltway, the only computer resources that could be added were mainframes.

Using Hardware Well
In comparing the efficiency of computing systems, a vital question is often overlooked: How effectively does software utilize the hardware? We’ve all experienced problems with applications that run poorly on non-native systems. Whether or not a piece of software can perform as intended, as well as use all of the available processing power, can have a huge impact on efficiency. In the case of mainframes, the hardware/software match is often a best-case scenario. Applications and operating systems that were designed prior to recent leaps in memory, I/O and processing are able to take advantage of these advances without some of the inefficiencies that non-native hardware/software pairings can introduce. That has a direct effect on electrical usage and efficiency.

People Power
We’ve been focusing on the efficiency of processors and cooling systems, but what about the human factor? How system administrators use their time is an important part of the energy efficiency equation. Once again, mainframes make a difference. Multiple smaller systems take more time to manage than fewer large ones. This may seem at first like a small point, but, like other issues explored here, the long tail effect can be significant. Consider that multiple smaller systems can each have multiple differences in configuration and more. Multiple small issues have a nasty habit of turning into bigger ones.

It goes without saying that energy efficiency is essential to a company’s success. But I’ve witnessed too many situations where a drive for greater efficiency occurs without considering the longer view or subtle details. Those who do take a full look at their options, however, may be well served by the impact of Big Iron.


10 security technologies destined for the dustbin

Systemic flaws and a rapidly shifting threatscape spell doom for many of today’s trusted security technologies

Perhaps nothing, not even the weather, changes as fast as computer technology. With that brisk pace of progress comes a grave responsibility: securing it.

Every wave of new tech, no matter how small or esoteric, brings with it new threats. The security community slaves to keep up and, all things considered, does a pretty good job against hackers, who shift technologies and methodologies rapidly, leaving last year’s well-recognized attacks to the dustbin.

Have you had to enable the write-protect notch on your floppy disk lately to prevent boot viruses or malicious overwriting? Have you had to turn off your modem to prevent hackers from dialing it at night? Have you had to unload your ansi.sys driver to prevent malicious text files from remapping your keyboard to make your next keystroke reformat your hard drive? Did you review your autoexec.bat and config.sys files to make sure no malicious entries were inserted to autostart malware?

Not so much these days — hackers have moved on, and the technology made to prevent older hacks like these is no longer top of mind. Sometimes we defenders have done such a good job that the attackers decided to move on to more fruitful options. Sometimes a particular defensive feature gets removed because the good guys determined it didn’t protect that well in the first place or had unexpected weaknesses.

If you, like me, have been in the computer security world long enough, you’ve seen a lot of security tech come and go. It’s almost to the point where you can start to predict what will stick and be improved and what will sooner or later become obsolete. The pace of change in attacks and technology alike mean that even so-called cutting-edge defenses, like biometric authentication and advanced firewalls, will eventually fail and go away. Surveying today’s defense technologies, here’s what I think is destined for the history books.

Doomed security technology No. 1: Biometric authentication

Biometric authentication is a tantalizing cure-all for log-on security. After all, using your face, fingerprint, DNA, or some other biometric marker seems like the perfect log-on credential — to someone who doesn’t specialize in log-on authentication. As far as those experts are concerned, it’s not so much that biometric methods are rarely as accurate as most people think; it’s more that, once stolen, your biometric markers can’t be changed.

Take your fingerprints. Most people have only 10. Anytime your fingerprints are used as a biometric logon, those fingerprints — or, more accurately, the digital representations of those fingerprints — must be stored for future log-on comparison. Unfortunately, log-on credentials are far too often compromised or stolen. If the bad guy steals the digital representation of your fingerprints, how could any system tell the difference between your real fingerprints and their previously accepted digital representations?

In that case, the only solution might be to tell every system in the world that might rely on your fingerprints to not rely on your fingerprints, if that were even possible. The same is true for any other biometric marker. You’ll have a hard time repudiating your real DNA, face, retina scan, and so on if a bad player gets their hands on the digital representation of those biometric markers.

That doesn’t even take into account systems that only allow you to log on with, say, your fingerprint, when you can no longer reliably use your fingerprint. What then?

Biometric markers used in conjunction with a secret only you know (password, PIN, and so on) are one way to defeat hackers that have your biometric logon marker. Of course mental secrets can be captured as well, as happens often with nonbiometric two-factor log-on credentials like smartcards and USB key fobs. In those instances, admins can easily issue you a new physical factor and you can pick a new PIN or password. That isn’t the case when one of the factors is your body.

While biometric logons are fast becoming a trendy security feature, there’s a reason they aren’t — and won’t ever be — ubiquitous. Once people realize that biometric logons aren’t what they pretend to be, they will lose popularity and either disappear, always require a second form of authentication, or only be used when high-assurance identification is not needed.

Doomed security technology No. 2: SSL

Secure Sockets Layer was invented by long-gone Netscape in 1995. For two decades it served us adequately. But if you haven’t heard, it is irrevocably broken and can’t be repaired, thanks to the POODLE attack. SSL’s replacement, TLS (Transport Layer Security), is slightly better. Of all the doomed security tech discussed in this article, SSL is the closest to being replaced, as it should no longer be used.

The problem? Hundreds of thousands of websites rely on or allow SSL. If you disable all SSL — a common default in the latest versions of popular browsers — all sorts of websites don’t work. Or they will work, but only because the browser or application accepts “downleveling” to SSL. If it’s not websites and browsers, then it’s the millions of old SSH servers out there.

OpenSSH is seemingly constantly being hacked these days. While it’s true that about half of OpenSSH hacks have nothing to do with SSL, SSL vulnerabilities account for the other half. Millions of SSH/OpenSSH sites still use SSL even though they shouldn’t.

Worse, terminology among tech pros is contributing to the problem, as nearly everyone in the computer security industry calls TLS digital certificates “SSL certs” though they don’t use SSL. It’s like calling a copy machine a Xerox when it’s not that brand. If we’re going to hasten the world off SSL, we need to start calling TLS certs “TLS certs.”

Make a vow today: Don’t use SSL ever, and call Web server certs TLS certs. That’s what they are or should be. The sooner we get rid of the word “SSL,” the sooner it will be relegated to history’s dustbin.
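For what it’s worth, keeping that vow is straightforward in code. A minimal Python sketch, pointed at a generic example.org endpoint, that verifies the server certificate and refuses to negotiate SSL 2.0 or 3.0 (recent Python versions already disable them by default; the explicit options just make the intent visible):

```python
import socket
import ssl

# Client context with certificate verification on; explicitly refuse the broken
# SSL 2.0/3.0 protocols so only TLS can be negotiated.
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3

with socket.create_connection(("example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
        print("negotiated protocol:", tls.version())  # e.g. 'TLSv1.2'
```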

Doomed security technology No. 3: Public key encryption

This may surprise some people, but most of the public key encryption we use today — RSA, Diffie-Hellman, and so on — is predicted to be readable as soon as quantum computing and cryptography are figured out. Many, including this author, have long (and incorrectly) been predicting that usable quantum computing was mere years away. But when researchers finally get it working, most known public encryption ciphers, including the popular ones, will be readily broken. Spy agencies around the world have been saving encrypted secrets for years waiting for the big breakthrough — or, if you believe some rumors, they already have solved the problem and are reading all our secrets.

Some crypto experts, like Bruce Schneier, have long been dubious about the promise of quantum cryptography. But even the critics can’t dismiss the likelihood that, once it’s figured out, any secret encrypted by RSA, Diffie-Hellman, or even ECC will be immediately readable.
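The gap is worth spelling out. The best known classical factoring algorithm, the general number field sieve, runs in sub-exponential but still super-polynomial time in the modulus n, while Shor’s quantum algorithm factors in polynomial time, commonly quoted on the order of (log n)^3:

```latex
\underbrace{\exp\!\Big(\big(\tfrac{64}{9}\big)^{1/3}(\ln n)^{1/3}(\ln\ln n)^{2/3}\,(1+o(1))\Big)}_{\text{general number field sieve (classical)}}
\qquad\text{vs.}\qquad
\underbrace{O\!\big((\log n)^{3}\big)}_{\text{Shor's algorithm (quantum)}}
```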

That’s not to say there aren’t quantum-resistant cipher algorithms. There are a few, including lattice-based cryptography and Supersingular Isogeny Key Exchange. But if your public cipher isn’t one of those, you’re out of luck if and when quantum computing becomes widespread.

Doomed security technology No. 4: IPsec
When enabled, IPsec allows all network traffic between two or more points to be cryptographically protected for packet integrity and privacy, aka encrypted. Invented in 1993 and made an open standard in 1995, IPsec is widely supported by hundreds of vendors and used on millions of enterprise computers.

Unlike most of the doomed security defenses discussed in this article, IPsec works and works great. But its problems are two-fold.

First, although widely used and deployed, it has never reached the critical mass necessary to keep it in use for much longer. Plus, IPsec is complex and isn’t supported by all vendors. Worse, it can often be defeated by only one device in between the source and destination that does not support it — such as a gateway or load balancer. At many companies, the number of computers that get IPsec exceptions is greater than the number of computers forced to use it.

IPsec’s complexity also creates performance issues. When enabled, it can significantly slow down every connection using it, unless you deploy specialized IPsec-enabled hardware on both sides of the tunnel. Thus, high-volume transaction servers such as databases and most Web servers simply can’t afford to employ it. And those two types of servers are precisely where most important data resides. If you can’t protect most data, what good is it?

Plus, despite being a “common” open standard, IPsec implementations don’t typically work between vendors, another factor that has slowed down or prevented widespread adoption of IPsec.

But the death knell for IPsec is the ubiquity of HTTPS. When you have HTTPS enabled, you don’t need IPsec. It’s an either/or decision, and the world has spoken. HTTPS has won. As long as you have a valid TLS digital certificate and a compatible client, it works: no interoperability problems, low complexity. There is some performance impact, but it’s not noticeable to most users. The world is quickly becoming a default world of HTTPS. As that progresses, IPsec dies.

Doomed security technology No. 5: Firewalls

The ubiquity of HTTPS essentially spells the doom of the traditional firewall. I wrote about this in 2012, creating a mini-firestorm that won me invites to speak at conferences all over the world.

Some people would say I was wrong. Three years later, firewalls are still everywhere. True, but most aren’t properly configured, and almost all lack the “least permissive, block-by-default” rules that make a firewall valuable in the first place. Most firewalls I come across have overly permissive rules. I often see “Allow All ANY ANY” rules, which essentially means the firewall is worse than useless. It’s doing nothing but slowing down network connections.

However you define a firewall, it must include some portion that allows only specific, predefined ports in order to be useful. As the world moves to HTTPS-only network connections, all firewalls will eventually have only a few rules — HTTP/HTTPS and maybe DNS. Other protocols, such as DNS, DHCP, and so on, will likely start using HTTPS-only too. In fact, I can’t imagine a future that doesn’t end up HTTPS-only. When that happens, what of the firewall?
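For readers who have never seen the difference, here is a minimal sketch of “least permissive, block-by-default” expressed as standard iptables commands driven from Python; it assumes a Linux host where HTTP and HTTPS are the only services meant to be reachable, and it requires root.

```python
import subprocess

def iptables(*args: str) -> None:
    """Run one iptables command (requires root); raise if it fails."""
    subprocess.run(["iptables", *args], check=True)

# Block everything inbound by default...
iptables("-P", "INPUT", "DROP")
# ...then allow return traffic for connections this host initiated (which covers
# outbound DNS lookups and the like)...
iptables("-A", "INPUT", "-m", "conntrack", "--ctstate", "ESTABLISHED,RELATED", "-j", "ACCEPT")
# ...and punch holes only for the services that are supposed to be reachable.
iptables("-A", "INPUT", "-p", "tcp", "--dport", "443", "-j", "ACCEPT")  # HTTPS
iptables("-A", "INPUT", "-p", "tcp", "--dport", "80", "-j", "ACCEPT")   # HTTP
```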

The main protection firewalls offer is to secure against a remote attack on a vulnerable service. Remotely vulnerable services, usually exploited by one-touch, remotely exploitable buffer overflows, used to be among the most common attacks. Look at the Robert Morris Internet worm, Code Red, Blaster, and SQL Slammer. But when’s the last time you heard of a global, fast-acting buffer overflow worm? Probably not since the early 2000s, and none of those were as bad as the worms from the 1980s and 1990s. Essentially, if you don’t have an unpatched, vulnerable listening service, then you don’t need a traditional firewall — and right now you don’t. Yep, you heard me right. You don’t need a firewall.

Firewall vendors often write to tell me that their “advanced” firewall has features beyond the traditional firewall that makes theirs worth buying. Well, I’ve been waiting for more than two decades for “advanced firewalls” to save the day. It turns out they don’t. If they perform “deep packet inspection” or signature scanning, it either slows down network traffic too much, is rife with false positives, or scans for only a small subset of attacks. Most “advanced” firewalls scan for a few dozen to a few hundred attacks. These days, more than 390,000 new malware programs are registered every day, not including all the hacker attacks that are indistinguishable from legitimate activity.

Even when firewalls do a perfect job at preventing what they say they prevent, they don’t really work, given that they don’t stop the two biggest malicious attacks most organizations face on a daily basis: unpatched software and social engineering.

Put it this way: Every customer and person I know currently running a firewall is as hacked as someone who doesn’t. I don’t fault firewalls. Perhaps they worked so well back in the day that hackers moved on to other sorts of attacks. For whatever reason, firewalls are nearly useless today and have been trending in that direction for more than a decade.

Doomed security technology No. 6: Antivirus scanners

Depending on whose statistics you believe, malware programs currently number in the tens to hundreds of millions — an overwhelming fact that has rendered antivirus scanners nearly useless.

Not entirely useless, because they stop 80 to 99.9 percent of attacks against the average user. But the average user is exposed to hundreds of malicious programs every year; even with the best odds, the bad guy wins every once in a while. If you keep your PC free from malware for more than a year, you’ve done something special.

That isn’t to say we shouldn’t applaud antivirus vendors. They’ve done a tremendous job against astronomical odds. I can’t think of any other sector that has had to adjust to the kind of overwhelming growth in numbers and advances in technology that antivirus makers have faced since the late 1980s, when there were only a few dozen viruses to detect.

But what will really kill antivirus scanners isn’t this glut of malware. It’s whitelisting. Right now the average computer will run any program you install. That’s why malware is everywhere. But computer and operating system manufacturers are beginning to reset the “run anything” paradigm for the safety of their customers — a movement that is antithetical to antivirus programs, which allow everything to run unimpeded except for programs that contain one of the more than 500 million known antivirus signatures. “Run by default, block by exception” is giving way to “block by default, allow by exception.”

Of course, computers have long had whitelisting programs, aka application control programs. I reviewed some of the more popular products back in 2009. The problem: Most people don’t use whitelisting, even when it’s built in. The biggest roadblock? The fear of what users will do if they can’t install everything they want willy-nilly or the big management headache of having to approve every program that can be run on a user’s system.
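The underlying idea is simple enough to sketch in a few lines of Python. This toy allow-list is obviously not Gatekeeper or AppLocker, and the digest and program path are placeholders, but it shows the “block by default, allow by exception” logic:

```python
import hashlib
import subprocess
import sys

# Hypothetical allow-list of SHA-256 digests for approved binaries; the single
# entry below is just a placeholder digest.
APPROVED = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def launch_if_approved(path: str) -> None:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest not in APPROVED:
        sys.exit(f"blocked: {path} is not on the whitelist")  # block by default
    subprocess.run([path], check=True)                        # allow by exception

launch_if_approved("/usr/local/bin/some-tool")  # hypothetical program path
```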

But malware and hackers are getting more pervasive and worse, and vendors are responding by enabling whitelisting by default. Apple’s OS X introduced a near version of default whitelisting three years ago with Gatekeeper. iOS devices have had near-whitelisting for much longer in that they can run only approved applications from the App Store (unless the device is jailbroken). Some malicious programs have slipped by Apple, but the process has been incredibly successful at stopping the huge influx that normally follows popular OSes and programs.

Microsoft has long had a similar mechanism, through Software Restriction Policies and AppLocker, but an even stronger push is coming in Windows 10 with DeviceGuard. Microsoft’s Windows Store also offers the same protections as Apple’s App Store. While Microsoft won’t be enabling DeviceGuard or Windows Store-only applications by default, the features are there and are easier to use than before.

Once whitelisting becomes the default on most popular operating systems, it’s game over for malware and, subsequently, for antivirus scanners. I can’t say I’ll miss either.

Doomed security technology No. 7: Antispam filters

Spam still makes up more than half of the Internet’s email. You might not notice this anymore, thanks to antispam filters, which have reached levels of accuracy that antivirus vendors can only claim to deliver. Yet spammers keep spitting out billions of unwanted messages each day. In the end, only two things will ever stop them: universal, pervasive, high-assurance authentication and more cohesive international laws.

Spammers still exist mainly because we can’t easily catch them. But as the Internet matures, pervasive anonymity will be replaced by pervasive high-assurance identities. At that point, when someone sends you a message claiming to have a bag of money to mail you, you will be assured they are who they say they are.

High-assurance identities can only be established when all users are required to adopt two-factor (or higher) authentication to verify their identity, followed by identity-assured computers and networks. Every cog in between the sender and the receiver will have a higher level of reliability. Part of that reliability will be provided by pervasive HTTPS (discussed above), but it will ultimately require additional mechanisms at every stage of authentication to assure that when I say I’m someone, I really am that someone.

Today, almost anyone can claim to be anyone else, and there’s no universal way to verify that person’s claim. This will change. Almost every other critical infrastructure we rely on — transportation, power, and so on — requires this assurance. The Internet may be the Wild West right now, but the increasingly essential nature of the Internet as infrastructure virtually ensures that it will eventually move in the direction of identity assurance.

Meanwhile, the international border problem that permeates nearly every online-criminal prosecution is likely to be resolved in the near future. Right now, many major countries do not accept evidence or warrants issued by other countries, which makes arresting spammers (and other malicious actors) nearly impossible. You can collect all the evidence you like, but if the attacker’s home country won’t enforce the warrant, your case is toast.

As the Internet matures, however, countries that don’t help ferret out the Internet’s biggest criminals will be penalized. They may be placed on a blacklist. In fact, some already are. For example, many companies and websites reject all traffic originating from China, whether it’s legitimate or not. Once we can identify criminals and their home countries beyond repudiation, as outlined above, those home countries will be forced to respond or suffer penalties.

The heyday of the spammers where most of their crap reached your inbox is already over. Pervasive identities and international law changes will close the coffin lid on spam — and the security tech necessary to combat it.

Doomed security technology No. 8: Anti-DoS protections

Thankfully, the same pervasive identity protections mentioned above will be the death knell for denial-of-service (DoS) attacks and the technologies that have arisen to quell them.

These days, anyone can launch free Internet tools to overwhelm websites with billions of packets. Most operating systems have built-in anti-DoS attack protections, and more than a dozen vendors can protect your websites even when being hit by extraordinary amounts of bogus traffic. But the loss of pervasive anonymity will stop all malicious senders of DoS traffic. Once we can identify them, we can arrest them.

Think of it this way: Back in the 1920s there were a lot of rich and famous bank robbers. Banks finally beefed up their protection, and cops got better at identifying and arresting them. Robbers still hit banks, but they rarely get rich, and they almost always get caught, especially when they persist in robbing more banks. The same will happen to DoS senders. As soon as we can quickly identify them, the sooner they will disappear as the bothersome elements of society that they are.

Doomed security technology No. 9: Huge event logs

Computer security event monitoring and alerting is difficult. Every computer is easily capable of generating tens of thousands of events on its own each day. Collect them to a centralized logging database and pretty soon you’re talking petabytes of needed storage. Today’s event log management systems are often lauded for the vast size of their disk storage arrays.

The only problem: This sort of event logging doesn’t work. When nearly every collected event packet is worthless and goes unread, and the cumulative effect of all the worthless unread events is a huge storage cost, something has to give. Soon enough admins will require application and operating system vendors to give them more signal and less noise, by passing along useful events without the mundane log clutter. In other words, event log vendors will soon be bragging about how little space they take rather than how much.
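A hedged illustration of “more signal and less noise” using nothing but Python’s standard logging module: a filter that forwards only warnings and above to the central handler, so routine chatter never leaves the box. The threshold and logger names are assumptions, not a recommendation from the article.

```python
import logging

class SignalOnly(logging.Filter):
    """Drop routine chatter; pass along only events worth a human's attention."""
    def filter(self, record: logging.LogRecord) -> bool:
        return record.levelno >= logging.WARNING

central = logging.StreamHandler()   # stand-in for a handler that ships logs off-box
central.addFilter(SignalOnly())
logging.basicConfig(level=logging.DEBUG, handlers=[central])

logging.getLogger("app").debug("cache refreshed")            # noise: never shipped
logging.getLogger("app").warning("repeated login failures")  # signal: shipped
```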

Doomed security technology No. 10: Anonymity tools (not to mention anonymity and privacy)

Lastly, any mistaken vestige of anonymity and privacy will be completely wiped away. We already really don’t have it. The best book I can recommend on the subject is Bruce Schneier’s “Data and Goliath.” A quick read will scare you to death if you didn’t already realize how little privacy and anonymity you truly have.

Even hackers who think that hiding on Tor and other “darknets” give them some semblance of anonymity must understand how quickly the cops are arresting people doing bad things on those networks. Anonymous kingpin after anonymous kingpin ends up being arrested, identified in court, and serving real jail sentences with real jail numbers attached to their real identity.

The truth is, anonymity tools don’t work. Many companies, and certainly law enforcement, already know who you are. The only difference is that, in the future, everyone will know the score and stop pretending they are staying hidden and anonymous online.

I would love for a consumer’s bill of rights guaranteeing privacy to be created and passed, but past experience teaches me that too many citizens are more than willing to give up their right to privacy in return for supposed protection. How do I know? Because it’s already the standard everywhere but the Internet. You can bet the Internet is next.


Is the cloud the right spot for your big data?

Is the cloud a good spot for big data?

That’s a controversial question, and the answer changes depending on who you ask.

Last week I attended the HP Big Data Conference in Boston, and both an HP customer and an HP executive told me that big data isn’t a good fit for the public cloud.

CB Bohn is a senior database engineer at Etsy and a user of HP’s Vertica database. The online marketplace uses the public cloud for some workloads, but its primary functions run out of a co-location center, Bohn said. It doesn’t make sense for the company to lift and shift its Postgres, Vertica, and Hadoop workloads into the public cloud, he said. Porting all the data associated with those programs would be a massive undertaking, and once it’s transferred, the company would have to pay ongoing costs to store it there. Meanwhile, Etsy already has a co-lo facility set up and the in-house expertise to manage the infrastructure required to run those programs. The cloud just isn’t a good fit for Etsy’s big data, Bohn said.

Chris Selland, VP of business development at HP’s Big Data software division, says most of the company’s customers aren’t using the cloud in a substantial way with big data. Perhaps that’s because HP’s big data cloud, named Helion, isn’t quite as mature as, say, Amazon Web Services or Microsoft Azure. Still, Selland said there are technical challenges (like data portability and data latency) along with non-technical reasons, such as company executives being more comfortable with the data not being in the cloud.

Bohn isn’t totally against the cloud, though. For quick, large processing jobs the cloud is great. “Spiky” workloads that need fast access to large amounts of compute resources are ideal for it. But if an organization has a constant need for compute and storage resources, it can be more efficient to buy commodity hardware and run it yourself.

Public cloud vendors like Amazon Web Services make the opposite argument. I asked AWS CTO Werner Vogels about private clouds recently, and he argued that businesses should not waste time building out data center infrastructure when AWS can supply it to them. Bohn counters that it’s cheaper to buy the equipment than to rent it over the long term.
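
Bohn’s rent-versus-buy point comes down to simple arithmetic. The sketch below compares renting an always-on cloud instance against buying comparable hardware and running it in a co-lo; every price, rate, and utilization figure is an illustrative assumption picked for the example, not a real quote from AWS, HP, or Etsy.

# Back-of-the-envelope rent-vs-buy sketch. All dollar figures and rates
# below are illustrative assumptions, not real AWS or hardware pricing.

HOURS_PER_MONTH = 730

def cloud_cost(months, hourly_rate=2.00, utilization=1.0):
    """Cost of renting one large instance; utilization < 1.0 models spiky use."""
    return months * HOURS_PER_MONTH * hourly_rate * utilization

def on_prem_cost(months, hardware=15_000, monthly_ops=300):
    """Cost of buying a comparable server plus co-lo, power, and admin overhead."""
    return hardware + months * monthly_ops

if __name__ == "__main__":
    for months in (6, 12, 24, 36):
        print(f"{months:>2} months: cloud ${cloud_cost(months):>9,.0f}"
              f"   on-prem ${on_prem_cost(months):>9,.0f}")
    # With these assumptions an always-on workload crosses over after
    # roughly a year, while a spiky workload (low utilization) stays
    # cheaper in the cloud -- which is roughly Bohn's point.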

As the public cloud has matured, it’s clear there’s still a debate about which workloads the cloud is good for and which it isn’t.

The real answer is that it depends on the business. For startups that were born in the cloud and keep all their data there, it makes sense to do data processing in the cloud as well. For companies with big data center footprints or co-location infrastructure already in place, there may not be a reason to lift and shift. Each business will have its own specific use cases, some of which may be good fits for the cloud and others not.



Spotlight may be on Amazon, but tech jobs are high profit and high stress

It’s true. People working in Silicon Valley may cry at their desks, may be expected to respond to emails in the middle of the night, and may be in the office when they’d rather be home sick in bed.

But that’s the price employees pay to work for some of the most successful and innovative tech companies in the world, according to industry analysts.

“It’s a pressure cooker for tech workers,” said Bill Reynolds, research director for Foote Partners LLC, an IT workforce research firm. “But for every disgruntled employee, someone will tell you it’s fine. This is the ticket to working in this area and they’re willing to pay it.”

The tech industry has been like this for years, he added. Employees are Type A personalities who thrive on the pressure, would rather focus on a project than get a full night’s sleep, and don’t mind pushing or being pushed.

If that’s not who they are, they should find another job, probably in another industry.

“A lot of tech companies failed, and the ones that made it, made it based on a driven culture. No one made it working 9 to 5,” said John Challenger, CEO of Challenger, Gray & Christmas, an executive outplacement firm. “Silicon Valley has been the vanguard of this type of work culture. It can get out of control. It can be too much and people can burn out. But it’s who these companies are.”

Work culture at tech companies, specifically at Amazon, hit the spotlight earlier this week when the New York Times ran a story on the online retailer and what it called its “bruising workplace.”

The story talked about employees crying at their desks, working 80-plus-hour weeks and being expected to work when they’re not well or after a family tragedy.

“At Amazon, workers are encouraged to tear apart one another’s ideas in meetings, toil long and late (emails arrive past midnight, followed by text messages asking why they were not answered), and are held to standards that the company boasts are ‘unreasonably high,’” the article noted.

In response, CEO Jeff Bezos sent a memo to employees saying he didn’t recognize the company described in the Times article.

“The article doesn’t describe the Amazon I know or the caring Amazonians I work with every day,” Bezos wrote. “More broadly, I don’t think any company adopting the approach portrayed could survive, much less thrive, in today’s highly competitive tech hiring market.”

Bezos hasn’t been the only one at Amazon to respond. Nick Ciubotariu, head of infrastructure development at Amazon, wrote a piece on LinkedIn taking on the Times article.

“During my 18 months at Amazon, I’ve never worked a single weekend when I didn’t want to. No one tells me to work nights,” he wrote. “We work hard, and have fun. We have Nerf wars, almost daily, that often get a bit out of hand. We go out after work. We have ‘Fun Fridays.’ We banter, argue, play video games and Foosball. And we’re vocal about our employee happiness.”

Amazon has high expectations of its workers because it’s one of the largest and most successful companies in the world, according to industry analysts.

The company, which started as an online book store, now sells everything from cosmetics to bicycles and toasters. With a market valuation of about $250 billion, Amazon even surpassed mega-retailer Walmart this summer as the most valuable retailer in the U.S.

With that kind of success comes a lot of pressure to stay on top and to come up with new, innovative ways to keep customers happy.

That kind of challenge can lead to a stressful workplace where employees are called on to work long hours and to outwork competitors’ own employees.

It’s just the nature of the beast, according to Victor Janulaitis, CEO of Janco Associates Inc., a management consulting firm.

“If you go to work for a high-powered company where you have a chance of being a millionaire in a few years, you are going to work 70 to 80 hours a week,” he said. “You are going to have to be right all the time and you are going to be under a lot of stress. Your regular Joe is really going to struggle there.”

This kind of work stress isn’t confined to Amazon. Far from it, Janulaitis said.

“I think it’s fairly widespread in any tech company that is successful,” he noted. “It’s just a very stressful environment. You’re dealing with a lot of money and a lot of Type A personalities who want to get things done. If you’re not a certain type of person, you’re not going to make it. It’s much like the Wild West. They have their own rules.”

Of course, tech companies, whether Amazon, Google, Apple or Facebook, are known to work people hard, going back to the days when IBM was launching its first PCs and Microsoft was making its Office software ubiquitous around the world.

However, tech companies also are known for giving their employees perks that people working in other industries only dream of.

Google, for instance, has world-class chefs cooking free food for its employees, while also setting up nap pods, meditation classes and sandy volleyball courts.

Netflix recently made global headlines for offering mothers and fathers unlimited time off during the first year after the birth or adoption of a child.

It’s the yin and yang of Silicon Valley, said Megan Slabinski, district president of Robert Half Technology, a human resources consulting firm.

“All those perks – the ping pong tables, the free snacks, the free day care — that started in the tech industry come with the job because the job is so demanding,” she said. “There’s a level of demand in the tech industry that translates to the work environment.”

When asked if Amazon is any harder on its employees than other major tech companies, Slabinski laughed.

“Amazon isn’t different culturally from other IT companies,” she said. “I’ve been doing this for 16 years. You see the good, the bad and the ugly. If you are working for tech companies, the expectation is you are going to work really hard. This is bleeding-edge technology, and the trade-off is there’s less work-life balance. The people who thrive in this industry, thrive on being on the bleeding edge. If you can’t take it, you go into another industry.”

Janulaitis noted that top-tier employees are always chased by other companies, but middle-tier workers – those who are doing a good job but might not be the brightest stars of the workforce – are hunkering down and staying put.

Fears of a still-jittery job market have convinced a lot of people to keep their heads down and put up with whatever their managers ask of them so they can keep paying their mortgages, especially if they live in pricey Silicon Valley.

That, said Janulaitis, makes companies more apt to ask even more from their employees, who know they’re likely stuck where they are for now.

“Once the job market changes, turnover will increase significantly in the IT field,” he said.

Like stock traders working under extreme pressure on Wall Street or medical interns working 36-hour shifts, the tech industry is a high-stress environment – one that’s not suited to every worker.

“If you can’t live with that pressure, you should go somewhere else,” said Reynolds. “For people in Silicon Valley, it’s who they are. It’s the kind of person they are.”

Sorriest technology companies of 2015

A rundown of the year in apologies from tech vendors and those whose businesses rely heavily on tech.

Sorry situation
Despite all the technology advances that have rolled out this year, it’s also been a sorry state of affairs among leading network and computing vendors, along with businesses that rely heavily on technology. Apple, Google, airlines and more have issued tech-related mea culpas in 2015…

Sony says Sorry by saying Thanks
Network outages caused by DDoS attacks spoiled holiday fun for those who got new PlayStation 4 games and consoles, so Sony kicked off 2015 with an offer of 10% off new purchases, plus an extended free trial for some.

NSA’s backdoor apology
After being outed by Microsoft researchers and later Edward Snowden for allowing backdoors to be inserted into devices via a key security standard, the NSA sort of apologized. NSA Director of Research Michael Wertheimer, writing in the Notices of the American Mathematical Society, acknowledged that mistakes were made in “The Mathematics Community and the NSA.” He wrote in part: “With hindsight, NSA should have ceased supporting the Dual_EC_DRBG algorithm immediately after security researchers discovered the potential for a trapdoor.”

You probably forgot about this flag controversy
China’s big WeChat messaging service apologized in January for bombarding many of its hundreds of millions of users – and not just those in the United States — with Stars and Stripes icons whenever they typed in the words “civil rights” on Martin Luther King, Jr. Day. WeChat also took heat for not offering any sort of special icons when users typed in patriotic Chinese terms. The special flag icons were only supposed to have been seen by US users of the service.

Go Daddy crosses the line
Web site domain provider Go Daddy as usual relied on scantily clad women as well as animals to spread its message during this past winter’s Super Bowl. The surprising thing is that the animals are what got the company in hot water this time. The company previewed an ad that was supposed to parody Budweiser commercials, but its puppy mill punch line didn’t have many people laughing, so the CEO wound up apologizing and pulling the ad.

Name calling at Comcast
Comcast scrambled to make things right after somehow changing the name of a customer on his bill to “(expletive… rhymes with North Pole) Brown” from his actual name, Ricardo Brown. The change took place after Brown’s wife called Comcast to discontinue cable service. The service provider told a USA Today columnist that it was investigating the matter, but in the meantime was refunding the Browns for two years of previous service.

Where to start with Google?
Google’s Department of Apologies has been busy this year: In January the company apologized when its translation services spit out anti-gay slurs in response to searches on the terms “gay” and “homosexual.” In May, Google apologized after a Maps user embedded an image of the Android mascot urinating on Apple’s logo. This summer, Google apologized both for its new Photos app mislabeling African Americans as “gorillas” and for Niantic Labs’ Ingress augmented-reality game including the sites of former Nazi concentration camps as points of interest.

Carnegie Mellon admissions SNAFU
Carnegie Mellon University’s Computer Science School in February apologized after it mistakenly accepted 800 applicants to its grad program, only to send out rejection notices hours later. The irony of a computer glitch leading to this problem at such a renowned computer science school was lost on no one…

Lenovo Superfish debacle
Lenovo officials apologized in February after it was discovered that Superfish adware packaged with some of its consumer notebooks was not only a pain for users but also included a serious security flaw resulting from interception of encrypted traffic. “I have a bunch of very embarrassed engineers on my staff right now,” said Lenovo CTO Peter Hortensius. “They missed this.” Lenovo worked with Microsoft and others to give users tools to rid themselves of Superfish.

Apple apologizes for tuning out customers
Apple apologized in March for an 11-hour iTunes service and App Store outage that it blamed on “an internal DNS error at Apple,” in a statement to CNBC.

Blame the iPads
American Airlines in April apologized after digital map application problems on pilot iPads delayed dozens of flights over a two-day period. The airline did stress that the problem was a third-party app, not the Apple products themselves.

Locker awakened
The creator of a strain of ransomware called Locker apologized after he “woke up” the malware, which encrypted files on infected devices and asked for money to release them. A week after the ransomware was activated, the creator apparently had a change of heart and released the decryption keys victims needed to unlock their systems.

HTC wants to be Hero
Phonemaker HTC’s CEO Cher Wang apologized to investors in June, according to the Taipei Times, after the company’s new One M9 flagship phone failed to boost sales. “HTC’s recent performance has let people down,” said Wang, pointing to better times ahead with the planned fall release of a new phone dubbed Hero.

Ketchup for adults only
Ketchup maker Heinz apologized in June after an outdated contest-related QR code on its bottles sent a German man to an X-rated website. Meanwhile, the website operator offered the man who complained a free year’s worth of access, which he declined.

Livid Reddit users push out interim CEO
Interim Reddit CEO Ellen Pao apologized in July (“we screwed up”) after the online news aggregation site went nuts over the sudden dismissal of an influential employee known for her work on the site’s popular Ask Me Anything section. Pao shortly afterwards resigned from her post following continued demands for her ouster by site users.

Blame the router
United Airlines apologized (“we experienced a network connectivity issue. We are working to resolve and apologize for any inconvenience.”) in July after being forced to ground its flights for two hours one morning due to a technology issue that turned out to be router-related. United has suffered a string of tech glitches since adopting Continental’s passenger management system a few years back following its acquisition of the airline.

Billion dollar apology
Top Toshiba executives resigned in July following revelations that the company had systematically padded its profits by more than $1 billion over a six-year period. “I recognize there has been the most serious damage to our brand image in our 140-year history,” said outgoing President Hisao Tanaka, who is to be succeeded by Chairman Masashi Muromachi. “We take what the committee has pointed out very seriously, and it is I and others in management who bear responsibility.”

Ultimate guide to Raspberry Pi operating systems, part 1

Raspberry Pi
Since we published a roundup of 10 Raspberry Pi operating systems, the number of choices has exploded. In this piece I’m including every option I could find (and for you pickers of nits, yes, I’m counting individual Linux distros as individual operating systems, so sue me). If you know of anything I’ve missed or a detail that’s wrong, please drop me a note at and I’ll update the piece and give you a shout out.

Want to know immediately when the next installment of this guide is published? Sign up and you’ll be the first to know.

Now on with the awesomeness …

Adafruit – Occidentalis v0.3
Occidentalis v0.3 is the result of running Adafruit’s Pi Bootstrapper on a Raspbian installation to build a platform for teaching electronics using the Raspberry Pi. Arguably not a true distro (the previous versions were), it’s included here because it’s kind of cool.

Arch Linux ARM
Arch Linux ARM is a fork of Arch Linux built for ARM processors. This distro has a long history of being used in a wide range of products, including the Pogoplug as well as the Raspberry Pi. It’s known for being both fast and stable. There is no default desktop, but above I show the Openbox option.

BerryTerminal has not been updated for several years: “BerryTerminal is a minimal Linux distribution designed to turn the Raspberry Pi mini computer into a low-cost thin client. It allows users to login to a central Edubuntu or other [Linux Terminal Server Project] server, and run applications on the central server.”

DarkELEC: “None of the currently available solutions do a perfect job running XBMC on the Pi, however OpenELEC comes by far the closest, in spite of its locked down nature. [The DarkELEC] fork aims to remedy the very few flaws in its implementation and to focus 100% on the Pi, while also sticking to the upstream and incorporating its updates.”

Debian 8 (“Jessie”)
Debian 8 (“Jessie”) is the latest and greatest version of Debian, and Sjoerd Simons of Collabora appears to have been the first person to get it running on the Raspberry Pi 2, back in February of this year. As of this writing, there isn’t an “official” release of Debian 8 for the Raspberry Pi, so if you go down this path, expect a few bumps (and complexities) along the way.

DietPi: “At its core, DietPi is the go to image for a minimal Raspbian/Debian Server install. We’ve stripped down and removed everything from the official Raspbian image to give us a bare minimal Raspbian server image that we call DietPi-Core.” DietPi is optimized for all Pi models and has a 120MB compressed image, fits on a 1GB or greater SD card, has only 11 running processes after boot, requires just 16MB of memory after boot, and, “unlike most Raspbian minimal images, ours includes full Wifi support.” An LXDE desktop is optional.
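
Claims like “only 11 running processes” and “16MB of memory after boot” are easy to spot-check on any freshly booted image. Below is a minimal sketch that reads the Linux /proc filesystem to count processes and estimate memory in use; it’s generic Linux, not a DietPi tool, and the output format is my own.

# Minimal sketch for spot-checking a "lightweight distro" claim on any
# Linux system with /proc (for example, a freshly booted Raspberry Pi image).
# Generic Linux only -- nothing here is DietPi-specific.

import os

def process_count():
    """Count /proc entries whose names are PIDs, i.e. running processes."""
    return sum(1 for name in os.listdir("/proc") if name.isdigit())

def memory_used_mb():
    """Approximate memory in use: MemTotal minus MemAvailable, in MB."""
    meminfo = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            meminfo[key] = int(value.split()[0])  # values are reported in kB
    available = meminfo.get("MemAvailable", meminfo["MemFree"])  # older kernels
    return (meminfo["MemTotal"] - available) / 1024

if __name__ == "__main__":
    print(f"running processes: {process_count()}")
    print(f"memory in use:     {memory_used_mb():.0f} MB")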

Fedora Remix (Pidora)
Fedora Remix (Pidora): Pidora is a Fedora Remix, a customized version of the Unix-like Fedora system, running on the ARM-based Raspberry Pi single-board computer, and it moves faster than a politician taking a donation. First released in 2003, Fedora has a long history and is noted for its stability. Given that there are thousands of packages available in the Pidora repository, you’ll be able to find pretty much any functionality or service you need for your project.

GeeXboX ARM is a free and open source media center Linux distribution for embedded devices and desktop computers. GeeXboX is not an application; it’s a full-featured OS that can be booted from a LiveCD, a USB key, or an SD/MMC card, or installed on an HDD. The core media delivery application is XBMC Media Center 12.2 (“Frodo”).

IPFire is a specialized version of Linux that operates as a firewall. Designed to be highly secure and fast, it’s managed through a Web-based interface.

Kali Linux
Kali Linux is one of my favorite flavors of Linux because of its excellent collection of penetration testing and diagnostic tools (plus it has a great logo). Being able to run this bad boy on a Raspberry Pi means you can have your own custom pen tester in your pocket.

Lessbian 8.1 (“Raptor”)
Lessbian 8.1 (“Raptor”): A stripped-down, bare-minimal Debian “Jessie.” The goal of Lessbian is to “provide a small and fast jessie image for servers and wifi security testing without the madness of system.” This release is described as “A bootable wifi system optimized for throughput, performance, and encryption” and it’s a great platform for running a Tor relay.

Minepeon: There’s gold in them thar’ BitCoin mines! You can get it out using the Minepeon operating system based on Linux and running on a Raspberry Pi. Of course you’re going to need a lot of machines to get your digital “quan” given how much more “work” is needed to mine BitCoin today, but given the price of the Raspberry Pi you won’t go broke assembling a roomful of miners. Show me the digital money!
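
To put a rough number on how much “work” that is: the expected number of hashes needed to find a Bitcoin block is the network difficulty times 2^32. The sketch below turns that into expected days per block for a Pi-based rig; the difficulty and hash-rate figures are placeholders chosen for illustration, not current network values.

# Rough sketch of the "more work needed" point. The expected number of
# hashes to find a Bitcoin block is difficulty * 2**32. The difficulty
# and hash-rate figures below are illustrative placeholders only.

SECONDS_PER_DAY = 86_400

def expected_days_per_block(difficulty, hashes_per_second):
    """Expected days for a solo miner to find one block."""
    expected_hashes = difficulty * 2**32
    return expected_hashes / hashes_per_second / SECONDS_PER_DAY

if __name__ == "__main__":
    difficulty = 50e9        # placeholder network difficulty
    pi_rig_hash_rate = 2e9   # placeholder: ~2 GH/s of USB miners on one Pi
    days = expected_days_per_block(difficulty, pi_rig_hash_rate)
    print(f"expected ~{days:,.0f} days (~{days / 365:,.0f} years) per block"
          " with these assumptions")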

Moebius: A minimal ARM HF distribution that needs just 20MB of RAM for the entire operating system and fits on a 128MB SD card. Version 2 is the current stable version. An LXDE desktop is optional.

nOS: Based on Ubuntu and KDE, this distro has been abandoned: “Development of nOS has stopped, existing versions will continue to work and receive updates from the package manufacturers until April 2019. The only things that will no longer be issued are updates for nOS specific software and the monthly image releases (they haven’t been going for a while anyway).”

OpenELEC, an acronym for Open Embedded Linux Entertainment Center, is a Linux-based OS that runs the popular XBMC open source digital media center software. The first release of OpenELEC was in 2013 and, according to the OpenELEC Wiki, “Installing OpenELEC for Raspberry Pi from a Linux computer is a very simple process and whether you’re new to Linux or a hardened *NIX user, you shouldn’t have any problems.”

OpenWrt for Raspberry Pi
OpenWrt for Raspberry Pi is “a Linux distribution for embedded devices.” Systems based on OpenWrt are most often used as routers and, with something like 3,500 optional add-on packages, its features can be tailored in pretty much any way imaginable. Want an ultraportable, incredibly tiny wireless router that can be run anywhere? OpenWrt on a Raspberry Pi running off a battery with a USB WiFi dongle can only be described as “epic.”

Raspberry Digital Signage
Raspberry Digital Signage is based on Debian Linux running on a Raspberry Pi and used in Web kiosks and digital signage (including digital photo frames). A really well-thought-out system, Digital Signage is designed to be easily administered while being as “hacker-proof” as possible.

Raspberry Pi Thin Client
Raspberry Pi Thin Client: Creates a very low-cost thin client that supports Microsoft RDC, Citrix ICA, VMware View, OpenNX, and SPICE.

Raspbian Pisces R3
Raspbian Pisces R3: Another non-official distro, Raspbian Pisces, created by Mike Thompson, is an SD image of Raspbian that creates a minimal Debian installation with the LXDE desktop.

Raspbian Server Edition
Raspbian Server Edition: A stripped-down version of Raspbian with some extra packages that boots to a command prompt. It is an excellent tool to use for testing hard float compilations and running benchmarks.

Raspbmc: Yet another distro that is designed for the popular XBMC open source digital media center, Raspbmc is lightweight and robust.

RaspEX (Edition 150706)
RaspEX (Edition 150706): RaspEX is a full Linux desktop system with LXDE and many other useful programs pre-installed. Chromium is used as the Web browser and Synaptic as the package manager. RaspEX uses Ubuntu’s software repositories, so you can install thousands of extra packages if you want.

Raspbian Debian 7.8 (“Wheezy”)
Raspbian Debian 7.8 (“Wheezy”): The Raspbian Debian “Wheezy” distro for the Raspberry Pi is a fully functional Debian Wheezy installation containing the LXDE desktop, the Epiphany browser, Wolfram Mathematica, and Scratch. It supports the Raspberry Pi and the Raspberry Pi 2 and is the current Debian version supported by the Raspberry Pi Foundation.

Red Sleeve Linux
Red Sleeve Linux: “RedSleeve Linux is a 3rd party ARM port of a Linux distribution of a Prominent North American Enterprise Linux Vendor (PNAELV). They object to being referred to by name in the context of clones and ports of their distribution, but if you are aware of CentOS and Scientific Linux, you can probably guess what RedSleeve is based on. RedSleeve is different from CentOS and Scientific Linux in that it isn’t a mere clone of the upstream distribution it is based on – it is a port to a new platform, since the upstream distribution does not include a version for ARM.”

RISC OS Pi: Originally developed and released in 1987 by UK-based Acorn Computers Ltd., RISC OS is, as the RISC OS Web site claims, “its own thing – a very specialized ARM-based operating system… if you’ve not used it before, you will find it doesn’t behave quite the same way as anything else.” RISC OS Pi has been available on the Raspberry Pi since 2012.

SliTaz GNU/Linux Raspberry Pi
The SliTaz GNU/Linux Raspberry Pi distribution is “a small operating system for a small computer! The goal is to provide a fast, minimal footprint and optimized distro for the Raspberry Pi. You can setup a wide range of system types, from servers to desktops and learning platforms.”

Windows 10 IoT Core Edition
Windows 10 IoT Core Edition’s GUI stack is limited to Microsoft’s Universal App Platform, so there’s no Windows desktop or even a command prompt. With PowerShell remoting you get a PowerShell terminal from which you can run Windows commands and see the output of native Win32 apps. The currently available preview version has no support for Wi-Fi or Bluetooth.

In our next installment of Network World’s Ultimate Guide to Raspberry Pi Operating Systems we’ll be covering a whole new collection: Bodhi, Commodore Pi, FreeBSD, Gentoo, ha-pi, I2Pberry, Kano OS, MINIBIAN, motionPie, Nard, NetBSD, OSMC, PiBang Linux, PiBox, PiMAME, PiParted, Plan 9, PwnPi, RasPlex, Slackware ARM, SlaXBMCRPi, slrpi, Tiny Core Linux, Ubuntu, Volumio, XBian, and more.

Want to know immediately when the next installment is published? Sign up and you’ll be the first to know.
Want more Pi? Check out 10 Reasons why the Raspberry Pi 2 Model B is a killer product and MIPS Creator CI20: Sort of a challenge to the Raspberry Pi 2 Model B. What could be the next RPi? Check out Endless: A computer the rest of the world can afford and How low can we go? Introducing the $9 Linux computer!
