10 security technologies destined for the dustbin

Systemic flaws and a rapidly shifting threatscape spell doom for many of today’s trusted security technologies

Perhaps nothing, not even the weather, changes as fast as computer technology. With that brisk pace of progress comes a grave responsibility: securing it.

Every wave of new tech, no matter how small or esoteric, brings with it new threats. The security community struggles to keep up and, all things considered, does a pretty good job against hackers, who shift technologies and methodologies rapidly, consigning last year’s well-recognized attacks to the dustbin.

Have you had to enable the write-protect notch on your floppy disk lately to prevent boot viruses or malicious overwriting? Have you had to turn off your modem to prevent hackers from dialing it at night? Have you had to unload your ansi.sys driver to prevent malicious text files from remapping your keyboard to make your next keystroke reformat your hard drive? Did you review your autoexec.bat and config.sys files to make sure no malicious entries were inserted to autostart malware?

Not so much these days — hackers have moved on, and the technology made to prevent older hacks like these is no longer top of mind. Sometimes we defenders have done such a good job that the attackers decided to move on to more fruitful options. Sometimes a particular defensive feature gets removed because the good guys determined it didn’t protect that well in the first place or had unexpected weaknesses.

If you, like me, have been in the computer security world long enough, you’ve seen a lot of security tech come and go. It’s almost to the point where you can start to predict what will stick and be improved and what will sooner or later become obsolete. The pace of change in attacks and technology alike means that even so-called cutting-edge defenses, like biometric authentication and advanced firewalls, will eventually fail and go away. Surveying today’s defense technologies, here’s what I think is destined for the history books.

Doomed security technology No. 1: Biometric authentication

Biometric authentication is a tantalizing cure-all for log-on security. After all, using your face, fingerprint, DNA, or some other biometric marker seems like the perfect log-on credential — to someone who doesn’t specialize in log-on authentication. As far as those experts are concerned, it’s not so much that biometric methods are rarely as accurate as most people think; it’s more that, once stolen, your biometric markers can’t be changed.

Take your fingerprints. Most people have only 10. Anytime your fingerprints are used as a biometric logon, those fingerprints — or, more accurately, the digital representations of those fingerprints — must be stored for future log-on comparison. Unfortunately, log-on credentials are far too often compromised or stolen. If the bad guy steals the digital representation of your fingerprints, how could any system tell the difference between your real fingerprints and their previously accepted digital representations?

In that case, the only solution might be to tell every system in the world that might rely on your fingerprints to not rely on your fingerprints, if that were even possible. The same is true for any other biometric marker. You’ll have a hard time repudiating your real DNA, face, retina scan, and so on if a bad player gets their hands on the digital representation of those biometric markers.

That doesn’t even take into account systems that allow you to log on only with, say, your fingerprint, when you can no longer reliably use that fingerprint. What then?

Biometric markers used in conjunction with a secret only you know (a password, a PIN, and so on) are one way to defeat hackers who have your biometric log-on marker. Of course, mental secrets can be captured as well, as happens often with nonbiometric two-factor log-on credentials like smartcards and USB key fobs. In those instances, admins can easily issue you a new physical factor, and you can pick a new PIN or password. That isn’t the case when one of the factors is your body.
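
To make this concrete, here is a minimal sketch in Python of what such a pairing might look like. The function and field names are illustrative assumptions, not any real product’s API; the point is simply that the PIN digest can be reissued after a breach, while the fingerprint digest cannot.

import hashlib
import hmac

# Hypothetical sketch: a biometric template is just data. If the stored digest leaks,
# it can be replayed forever, so it is paired with a secret that can be changed.
def _digest(data: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", data, salt, 100_000)

def enroll(fingerprint_template: bytes, pin: str, salt: bytes) -> dict:
    return {
        "template_digest": _digest(fingerprint_template, salt),
        "pin_digest": _digest(pin.encode(), salt),
        "salt": salt,
    }

def verify(record: dict, template: bytes, pin: str) -> bool:
    ok_bio = hmac.compare_digest(record["template_digest"], _digest(template, record["salt"]))
    ok_pin = hmac.compare_digest(record["pin_digest"], _digest(pin.encode(), record["salt"]))
    # Both factors must pass; only the PIN can be rotated if the database is stolen.
    return ok_bio and ok_pin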

While biometric logons are fast becoming a trendy security feature, there’s a reason they aren’t — and won’t ever be — ubiquitous. Once people realize that biometric logons aren’t what they pretend to be, they will lose popularity and either disappear, always require a second form of authentication, or only be used when high-assurance identification is not needed.

Doomed security technology No. 2: SSL

Secure Sockets Layer was invented by long-gone Netscape in 1995. For two decades it served us adequately. But if you haven’t heard, it is irrevocably broken and can’t be repaired, thanks to the POODLE attack. SSL’s replacement, TLS (Transport Layer Security), is slightly better. Of all the doomed security tech discussed in this article, SSL is the closest to being replaced, as it should no longer be used.

The problem? Hundreds of thousands of websites rely on or allow SSL. If you disable all SSL — a common default in the latest versions of popular browsers — all sorts of websites don’t work. Or they will work, but only because the browser or application accepts “downleveling” to SSL. If it’s not websites and browsers, then it’s the millions of old SSH servers out there.
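
If you control the client code, refusing that downlevel dance is straightforward. Here is a small sketch using Python’s standard library that rejects anything older than TLS 1.2 (the URL is just an example):

import ssl
import urllib.request

# Build a client context that cannot be "downleveled": certificate checks stay on
# and anything older than TLS 1.2 (including every SSL version) is refused.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with urllib.request.urlopen("https://example.com/", context=context) as resp:
    print(resp.status, resp.getheader("Content-Type"))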

OpenSSH seems to be under constant attack these days. While it’s true that about half of OpenSSH hacks have nothing to do with SSL, SSL vulnerabilities account for the other half. Millions of SSH/OpenSSH sites still use SSL even though they shouldn’t.

Worse, terminology among tech pros is contributing to the problem, as nearly everyone in the computer security industry calls TLS digital certificates “SSL certs” even though they don’t use SSL. It’s like calling a copy machine a Xerox when it’s not that brand. If we’re going to hasten the world off SSL, we need to start calling TLS certs “TLS certs.”

Make a vow today: Don’t use SSL ever, and call Web server certs TLS certs. That’s what they are or should be. The sooner we get rid of the word “SSL,” the sooner it will be relegated to history’s dustbin.

Doomed security technology No. 3: Public key encryption

This may surprise some people, but most of the public key encryption we use today — RSA, Diffie-Hellman, and so on — is predicted to be readable as soon as quantum computing and cryptography are figured out. Many, including this author, have long (and incorrectly) been predicting that usable quantum computing was mere years away. But when researchers finally get it working, most known public encryption ciphers, including the popular ones, will be readily broken. Spy agencies around the world have been saving encrypted secrets for years waiting for the big breakthrough — or, if you believe some rumors, they have already solved the problem and are reading all our secrets.

Some crypto experts, like Bruce Schneier, have long been dubious about the promise of quantum cryptography. But even the critics can’t dismiss the likelihood that, once it’s figured out, any secret encrypted by RSA, Diffie-Hellman, or even ECC will be immediately readable.

That’s not to say there aren’t quantum-resistant cipher algorithms. There are a few, including lattice-based cryptography and Supersingular Isogeny Key Exchange. But if your public cipher isn’t one of those, you’re out of luck if and when quantum computing becomes widespread.
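
For a sense of why the experts worry, here is a toy illustration in Python, with deliberately tiny numbers and no pretense of being real cryptography: everything about RSA’s secrecy rests on the attacker being unable to factor n. Brute-force factoring stands in for what Shor’s algorithm would do to a real-sized key on a working quantum computer.

def toy_rsa_break(n: int, e: int, ciphertext: int) -> int:
    # Recover p and q by brute force -- the step a quantum computer would make feasible
    p = next(i for i in range(2, n) if n % i == 0)
    q = n // p
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)            # derive the private exponent from public values (Python 3.8+)
    return pow(ciphertext, d, n)   # decrypt without ever being handed the private key

n, e = 3233, 17                    # textbook toy key (p=61, q=53)
ct = pow(42, e, n)                 # "encrypt" the message 42 with the public key
print(toy_rsa_break(n, e, ct))     # prints 42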

Doomed security technology No. 4: IPsec

When enabled, IPsec allows all network traffic between two or more points to be cryptographically protected for packet integrity and privacy — in other words, encrypted. Invented in 1993 and made an open standard in 1995, IPsec is widely supported by hundreds of vendors and used on millions of enterprise computers.

Unlike most of the doomed security defenses discussed in this article, IPsec works and works great. But its problems are two-fold.

First, although widely used and deployed, it has never reached the critical mass necessary to keep it in use for much longer. Plus, IPsec is complex and isn’t supported by all vendors. Worse, it can often be defeated by a single device between the source and destination that doesn’t support it — such as a gateway or load balancer. At many companies, the number of computers that get IPsec exceptions is greater than the number of computers forced to use it.

IPsec’s complexity also creates performance issues. When enabled, it can significantly slow down every connection using it, unless you deploy specialized IPsec-enabled hardware on both sides of the tunnel. Thus, high-volume transaction servers such as databases and most Web servers simply can’t afford to employ it. And those two types of servers are precisely where most important data resides. If you can’t protect most data, what good is it?

Plus, despite being a “common” open standard, IPsec implementations don’t typically work between vendors, another factor that has slowed down or prevented widespread adoption of IPsec.

But the death knell for IPsec is the ubiquity of HTTPS. When you have HTTPS enabled, you don’t need IPsec. It’s an either/or decision, and the world has spoken. HTTPS has won. As long as you have a valid TLS digital certificate and a compatible client, it works: no interoperability problems, low complexity. There is some performance impact, but it’s not noticeable to most users. The world is quickly becoming a default world of HTTPS. As that progresses, IPsec dies.
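
Standing up that kind of endpoint doesn’t take much. Here is a minimal sketch of an HTTPS-only server using Python’s standard library; cert.pem and key.pem are placeholder file names for your own TLS certificate and private key.

import http.server
import ssl

# Serve the current directory over HTTPS only, refusing anything older than TLS 1.2.
httpd = http.server.HTTPServer(("0.0.0.0", 8443), http.server.SimpleHTTPRequestHandler)

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")  # placeholder paths

httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()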

Doomed security technology No. 5: Firewalls

The ubiquity of HTTPS essentially spells the doom of the traditional firewall. I wrote about this in 2012, creating a mini-firestorm that won me invites to speak at conferences all over the world.

Some people would say I was wrong. Three years later, firewalls are still everywhere. True, but most aren’t configured properly, and almost none have the “least permissive, block-by-default” rules that make a firewall valuable in the first place. Most firewalls I come across have overly permissive rules. I often see “Allow All ANY ANY” rules, which essentially means the firewall is worse than useless. It’s doing nothing but slowing down network connections.

Any way you define a firewall, it must include some portion that allows only specific, predefined ports in order to be useful. As the world moves to HTTPS-only network connections, all firewalls will eventually have only a few rules: HTTPS and maybe DNS. Other protocols, such as DNS, DHCP, and so on, will likely start using HTTPS-only too. In fact, I can’t imagine a future that doesn’t end up HTTPS-only. When that happens, what of the firewall?
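
The logic of a useful rule set fits in a few lines. This is only a conceptual sketch in Python, not a real packet filter, and the rules are illustrative:

# Block by default, allow by exception -- the opposite of "Allow All ANY ANY".
ALLOW_RULES = [
    {"proto": "tcp", "port": 443},   # HTTPS
    {"proto": "udp", "port": 53},    # DNS, until it too rides over HTTPS
]

def is_allowed(proto: str, port: int) -> bool:
    return any(r["proto"] == proto and r["port"] == port for r in ALLOW_RULES)

print(is_allowed("tcp", 443))   # True
print(is_allowed("tcp", 23))    # False: telnet is denied because nothing allows it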

The main protection firewalls offer is to secure against a remote attack on a vulnerable service. Remotely vulnerable services, usually exploited by one-touch, remotely exploitable buffer overflows, used to be among the most common attacks. Look at the Robert Morris Internet worm, Code Red, Blaster, and SQL Slammer. But when’s the last time you heard of a global, fast-acting buffer overflow worm? Probably not since the early 2000s, and none of those were as bad as the worms from the 1980s and 1990s. Essentially, if you don’t have an unpatched, vulnerable listening service, then you don’t need a traditional firewall — and right now you don’t. Yep, you heard me right. You don’t need a firewall.

Firewall vendors often write to tell me that their “advanced” firewall has features beyond the traditional firewall that make theirs worth buying. Well, I’ve been waiting for more than two decades for “advanced firewalls” to save the day. It turns out they don’t. If they perform “deep packet inspection” or signature scanning, it either slows down network traffic too much, is rife with false positives, or scans for only a small subset of attacks. Most “advanced” firewalls scan for a few dozen to a few hundred attacks. These days, more than 390,000 new malware programs are registered every day, not including all the hacker attacks that are indistinguishable from legitimate activity.

Even when firewalls do a perfect job at preventing what they say they prevent, they don’t really work, given that they don’t stop the two biggest malicious attacks most organizations face on a daily basis: unpatched software and social engineering.

Put it this way: Every customer and person I know currently running a firewall is as hacked as someone who doesn’t. I don’t fault firewalls. Perhaps they worked so well back in the day that hackers moved on to other sorts of attacks. For whatever reason, firewalls are nearly useless today and have been trending in that direction for more than a decade.

Doomed security technology No. 6: Antivirus scanners

Depending on whose statistics you believe, malware programs currently number in the tens to hundreds of millions — an overwhelming fact that has rendered antivirus scanners nearly useless.

Not entirely useless, because they stop 80 to 99.9 percent of attacks against the average user. But the average user is exposed to hundreds of malicious programs every year; even with the best odds, the bad guy wins every once in a while. If you keep your PC free from malware for more than a year, you’ve done something special.

That isn’t to say we shouldn’t applaud antivirus vendors. They’ve done a tremendous job against astronomical odds. I can’t think of any other sector that has had to adjust to such overwhelming growth in numbers and advances in technology since the late 1980s, when there were only a few dozen viruses to detect.

But what will really kill antivirus scanners isn’t this glut of malware. It’s whitelisting. Right now the average computer will run any program you install. That’s why malware is everywhere. But computer and operating system manufacturers are beginning to reset the “run anything” paradigm for the safety of their customers — a movement that is antithetical to antivirus programs, which allow everything to run unimpeded except for programs that contain one of the more than 500 million known antivirus signatures. “Run by default, block by exception” is giving way to “block by default, allow by exception.”
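
At its simplest, whitelisting is nothing more exotic than an allow list consulted before anything executes. Here is a rough Python sketch of the idea; real products such as AppLocker and Gatekeeper also evaluate publisher signatures and paths, not just hashes, and the digest below is only a placeholder.

import hashlib
import sys

APPROVED_SHA256 = {
    # Placeholder digest -- in practice these come from your software-approval process.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_run(path: str) -> bool:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in APPROVED_SHA256   # anything unknown is refused by default

print(may_run(sys.executable))         # False unless this interpreter's hash has been approved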

Of course, computers have long had whitelisting programs, aka application control programs. I reviewed some of the more popular products back in 2009. The problem: Most people don’t use whitelisting, even when it’s built in. The biggest roadblock? The fear of what users will do if they can’t install everything they want willy-nilly or the big management headache of having to approve every program that can be run on a user’s system.

But malware and hackers are getting more pervasive and worse, and vendors are responding by enabling whitelisting by default. Apple’s OS X introduced a near version of default whitelisting three years ago with Gatekeeper. iOS devices have had near-whitelisting for much longer in that they can run only approved applications from the App Store (unless the device is jailbroken). Some malicious programs have slipped by Apple, but the process has been incredibly successful at stopping the huge influx that normally follows popular OSes and programs.

Microsoft has long had a similar mechanism, through Software Restriction Policies and AppLocker, but an even stronger push is coming in Windows 10 with DeviceGuard. Microsoft’s Windows Store also offers the same protections as Apple’s App Store. While Microsoft won’t be enabling DeviceGuard or Windows Store-only applications by default, the features are there and are easier to use than before.

Once whitelisting becomes the default on most popular operating systems, it’s game over for malware and, subsequently, for antivirus scanners. I can’t say I’ll miss either.

Doomed security technology No. 7: Antispam filters

Spam still makes up more than half of the Internet’s email. You might not notice this anymore, thanks to antispam filters, which have reached levels of accuracy that antivirus vendors can only claim to deliver. Yet spammers keep spitting out billions of unwanted messages each day. In the end, only two things will ever stop them: universal, pervasive, high-assurance authentication and more cohesive international laws.

Spammers still exist mainly because we can’t easily catch them. But as the Internet matures, pervasive anonymity will be replaced by pervasive high-assurance identities. At that point, when someone sends you a message claiming to have a bag of money to mail you, you will be assured they are who they say they are.

High-assurance identities can only be established when all users are required to adopt two-factor (or higher) authentication to verify their identity, followed by identity-assured computers and networks. Every cog in between the sender and the receiver will have a higher level of reliability. Part of that reliability will be provided by pervasive HTTPS (discussed above), but it will ultimately require additional mechanisms at every stage of authentication to assure that when I say I’m someone, I really am that someone.
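
For a flavor of what one piece of that plumbing looks like today, here is a short sketch of a time-based one-time password (TOTP, per RFC 6238) generator in Python. The secret is an example value; a real deployment pairs this with server-side verification, enrollment, and rate limiting.

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // step)                      # changes every 30 seconds
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                              # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example secret; prints the current 6-digit code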

Today, almost anyone can claim to be anyone else, and there’s no universal way to verify that person’s claim. This will change. Almost every other critical infrastructure we rely on — transportation, power, and so on — requires this assurance. The Internet may be the Wild West right now, but the increasingly essential nature of the Internet as infrastructure virtually ensures that it will eventually move in the direction of identity assurance.

Meanwhile, the international border problem that permeates nearly every online-criminal prosecution is likely to be resolved in the near future. Right now, many major countries do not accept evidence or warrants issued by other countries, which makes arresting spammers (and other malicious actors) nearly impossible. You can collect all the evidence you like, but if the attacker’s home country won’t enforce the warrant, your case is toast.

As the Internet matures, however, countries that don’t help ferret out the Internet’s biggest criminals will be penalized. They may be placed on a blacklist. In fact, some already are. For example, many companies and websites reject all traffic originating from China, whether it’s legitimate or not. Once we can identify criminals and their home countries beyond repudiation, as outlined above, those home countries will be forced to respond or suffer penalties.

The heyday of spammers, when most of their crap reached your inbox, is already over. Pervasive identities and international law changes will close the coffin lid on spam — and the security tech necessary to combat it.

Doomed security technology No. 8: Anti-DoS protections

Thankfully, the same pervasive identity protections mentioned above will be the death knell for denial-of-service (DoS) attacks and the technologies that have arisen to quell them.

These days, anyone can launch free Internet tools to overwhelm websites with billions of packets. Most operating systems have built-in anti-DoS attack protections, and more than a dozen vendors can protect your websites even when being hit by extraordinary amounts of bogus traffic. But the loss of pervasive anonymity will stop all malicious senders of DoS traffic. Once we can identify them, we can arrest them.
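
By way of illustration, the simplest software-level protection is plain rate limiting. Here is a conceptual token-bucket sketch in Python; the parameters are arbitrary, and real anti-DoS defenses operate far lower in the stack and across many machines.

import time

class TokenBucket:
    # Each client gets `rate` requests per second, with bursts up to `burst`.
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.allowance = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.allowance = min(self.burst, self.allowance + (now - self.last) * self.rate)
        self.last = now
        if self.allowance < 1:
            return False        # over the limit: drop or delay this request
        self.allowance -= 1
        return True

bucket = TokenBucket(rate=5, burst=10)
print(sum(bucket.allow() for _ in range(100)))   # roughly 10 of 100 burst requests get through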

Think of it this way: Back in the 1920s there were a lot of rich and famous bank robbers. Banks finally beefed up their protection, and cops got better at identifying and arresting them. Robbers still hit banks, but they rarely get rich, and they almost always get caught, especially when they persist in robbing more banks. The same will happen to DoS senders. As soon as we can quickly identify them, the sooner they will disappear as the bothersome elements of society that they are.

Doomed security technology No. 9: Huge event logs

Computer security event monitoring and alerting is difficult. Every computer is easily capable of generating tens of thousands of events on its own each day. Collect them to a centralized logging database and pretty soon you’re talking petabytes of needed storage. Today’s event log management systems are often lauded for the vast size of their disk storage arrays.

The only problem: This sort of event logging doesn’t work. When nearly every collected event is worthless and goes unread, and the cumulative effect of all those worthless, unread events is a huge storage cost, something has to give. Soon enough admins will require application and operating system vendors to give them more signal and less noise, by passing along useful events without the mundane log clutter. In other words, event log vendors will soon be bragging about how little space they take rather than how much.
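
What that might look like in practice: a small, illustrative Python filter that forwards only events someone might actually read. The field names, event IDs, and severity levels are assumptions, not any vendor’s schema.

import json

FORWARD_LEVELS = {"warning", "error", "critical"}
NOISY_EVENT_IDS = {4624, 5156}          # e.g. routine successful logons and allowed connections

def should_forward(event: dict) -> bool:
    if event.get("event_id") in NOISY_EVENT_IDS:
        return False                    # drop the mundane clutter at the edge
    return event.get("level", "info").lower() in FORWARD_LEVELS

events = [
    {"event_id": 4624, "level": "info",    "msg": "An account was successfully logged on"},
    {"event_id": 4625, "level": "warning", "msg": "An account failed to log on"},
]
print(json.dumps([e for e in events if should_forward(e)], indent=2))   # only the failure ships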

Doomed security technology No. 10: Anonymity tools (not to mention anonymity and privacy)

Lastly, any mistaken vestige of anonymity and privacy will be completely wiped away. We already really don’t have it. The best book I can recommend on the subject is Bruce Schneier’s “Data and Goliath.” A quick read will scare you to death if you didn’t already realize how little privacy and anonymity you truly have.

Even hackers who think that hiding on Tor and other “darknets” gives them some semblance of anonymity must understand how quickly the cops are arresting people doing bad things on those networks. Anonymous kingpin after anonymous kingpin ends up being arrested, identified in court, and serving a real jail sentence, with a real prisoner number attached to a real identity.

The truth is, anonymity tools don’t work. Many companies, and certainly law enforcement, already know who you are. The only difference is that, in the future, everyone will know the score and stop pretending they are staying hidden and anonymous online.

I would love for a consumer’s bill of rights guaranteeing privacy to be created and passed, but past experience teaches me that too many citizens are more than willing to give up their right to privacy in return for supposed protection. How do I know? Because it’s already the standard everywhere but the Internet. You can bet the Internet is next.

 


Is the cloud the right spot for your big data?

Is the cloud a good spot for big data?

That’s a controversial question, and the answer changes depending on who you ask.

Last week I attended the HP Big Data Conference in Boston and both an HP customer and an executive told me that big data isn’t a good fit for the public cloud.

CB Bohn is a senior database engineer at Etsy, and a user of HP’s Vertica database. The online marketplace uses the public cloud for some workloads, but its primary functions are run out of a co-location center, Bohn said. It doesn’t make sense for the company to lift and shift its Postgres, Vertica SQL and Hadoop workloads into the public cloud, he said. It would be a massive undertaking for the company to port all the data associated with those programs into the cloud. Then, once it’s transferred to the cloud, the company would have to pay ongoing costs to store it there. Meanwhile, the company has a co-lo facility already set up and expertise in house to manage the infrastructure required to run those programs. The cloud just isn’t a good fit for Etsy’s big data, Bohn says.

Chris Selland, VP of Business Development at HP’s Big Data software division, says most of the company’s customers aren’t using the cloud in a substantial way with big data. Perhaps that’s because HP’s big data cloud, named Helion, isn’t quite as mature as, say, Amazon Web Services or Microsoft Azure. But still, Selland said there are both technical challenges (like data portability and data latency) and non-technical reasons, such as company executives being more comfortable with the data not being in the cloud.

Bohn isn’t totally against the cloud though. For quick, large processing jobs the cloud is great. “Spikey” workloads that need fast access to large amounts of compute resources are ideal for the cloud. But, if an organization has a constant need for compute and storage resources, it can be more efficient to buy commodity hardware and run it yourself.

Public cloud vendors like Amazon Web Services make the opposite argument. I asked Amazon.com CTO Werner Vogels about private clouds recently, and he argued that businesses should not waste time building out data center infrastructure when AWS can supply it to them. Bohn counters that it’s cheaper to buy the equipment than to rent it over the long term.

As the public cloud has matured, it’s clear there’s still a debate about which workloads the cloud is good for and which it isn’t.

The real answer to this question is that it depends on the business. For startups that were born in the cloud and have all their data in the cloud, it makes sense to do data processing there too. For companies with big data center footprints or co-location infrastructure already set up, there may not be a reason to lift and shift to the cloud. Each business will have its own specific use cases, some of which may be good for the cloud, and others which may not be.

 



Spotlight may be on Amazon, but tech jobs are high profit and high stress

It’s true. People working in Silicon Valley may cry at their desks, may be expected to respond to emails in the middle of the night and be in the office when they’d rather be sick in bed.

But that’s the price employees pay to work for some of the most successful and innovative tech companies in the world, according to industry analysts.

“It’s a pressure cooker for tech workers,” said Bill Reynolds, research director for Foote Partners LLC, an IT workforce research firm. “But for every disgruntled employee, someone will tell you it’s fine. This is the ticket to working in this area and they’re willing to pay it.”

The tech industry has been like this for years, he added.
Employees tend to be Type A personalities who thrive on the pressure, would rather focus on a project than get a full night’s sleep, and don’t mind pushing or being pushed.

If that’s not who they are, they should find another job, probably in another industry.

“A lot of tech companies failed, and the ones that made it, made it based on a driven culture. No one made it working 9 to 5,” said John Challenger, CEO of Challenger, Gray & Christmas, an executive outplacement firm. “Silicon Valley has been the vanguard of this type of work culture. It can get out of control. It can be too much and people can burn out. But it’s who these companies are.”

Work culture at tech companies, specifically at Amazon, hit the spotlight earlier this week when the New York Times ran a story on the online retailer and what it called its “bruising workplace.”

The story talked about employees crying at their desks, working 80-plus-hour weeks and being expected to work when they’re not well or after a family tragedy.

“At Amazon, workers are encouraged to tear apart one another’s ideas in meetings, toil long and late (emails arrive past midnight, followed by text messages asking why they were not answered), and held to standards that the company boasts are ‘unreasonably high,’” the article noted.

In response, Amazon.com CEO Jeff Bezos sent a memo to employees saying he didn’t recognize the company described in the Times article.

“The article doesn’t describe the Amazon I know or the caring Amazonians I work with every day,” Bezos wrote. “More broadly, I don’t think any company adopting the approach portrayed could survive, much less thrive, in today’s highly competitive tech hiring market.”

Bezos hasn’t been the only one at Amazon to respond. Nick Ciubotariu, head of Infrastructure development at Amazon.com, wrote a piece on LinkedIn, taking on the Times article.

“During my 18 months at Amazon, I’ve never worked a single weekend when I didn’t want to. No one tells me to work nights,” he wrote. “We work hard, and have fun. We have Nerf wars, almost daily, that often get a bit out of hand. We go out after work. We have ‘Fun Fridays.’ We banter, argue, play video games and Foosball. And we’re vocal about our employee happiness.”

Amazon has high expectations of its workers because it’s one of the largest and most successful companies in the world, according to industry analysts.

The company, which started as an online book store, now sells everything from cosmetics to bicycles and toasters. With a valuation of $250 billion, Amazon even surpassed mega retailer Walmart this summer as the biggest retailer in the U.S.

With that kind of success comes a lot of pressure to stay on top and to come up with new, innovative ways to keep customers happy.

That kind of challenge can lead to a stressful workplace where employees are called on to work long hours and to outwork competitors’ own employees.

It’s just the way of the beast, according to Victor Janulaitis, CEO of Janco Associates Inc., a management consulting firm.

“If you go to work for a high-powered company where you have a chance of being a millionaire in a few years, you are going to work 70 to 80 hours a week,” he said. “You are going to have to be right all the time and you are going to be under a lot of stress. Your regular Joe is really going to struggle there.”

This kind of work stress isn’t relegated to Amazon alone. Far from it, Janulaitis said.

“I think it’s fairly widespread in any tech company that is successful,” he noted. “It’s just a very stressful environment. You’re dealing with a lot of money and a lot of Type A personalities who want to get things done. If you’re not a certain type of person, you’re not going to make it. It’s much like the Wild West. They have their own rules.”

Of course, tech companies, whether Amazon, Google, Apple or Facebook, are known to work people hard, going back to the days when IBM was launching its first PCs and Microsoft was making its Office software ubiquitous around the world.

However, tech companies also are known for giving their employees perks that people working in other industries only dream of.

Google, for instance, has world-class chefs cooking free food for its employees, while also setting up nap pods, meditation classes and sandy volleyball courts.

Netflix recently made global headlines for offering mothers and fathers unlimited time off for up to a year after the birth or adoption of a child.

It’s the yin and yang of Silicon Valley, said Megan Slabinski, district president of Robert Half Technology, a human resources consulting firm.

“All those perks – the ping pong tables, the free snacks, the free day care — that started in the tech industry come with the job because the job is so demanding,” she said. “There’s a level of demand in the tech industry that translates to the work environment.”

When asked if Amazon is any harder on its employees than other major tech companies, Slabinski laughed.

“Amazon isn’t different culturally from other IT companies,” she said. “I’ve been doing this for 16 years. You see the good, the bad and the ugly. If you are working for tech companies, the expectation is you are going to work really hard. This is bleeding-edge technology, and the trade-off is there’s less work-life balance. The people who thrive in this industry, thrive on being on the bleeding edge. If you can’t take it, you go into another industry.”

Janulaitis noted that top-tier employees are always chased by other companies, but middle-tier workers – those who are doing a good job but might not be the brightest stars of the workforce – are hunkering down and staying put.

Fears of a still jittery job market have convinced a lot of people to keep their heads down, put up with whatever their managers ask of them and continue to be able to pay their mortgages, especially if they live in pricey Silicon Valley.

That, said Janulaitis, makes companies more apt to ask even more from their employees, who know they’re likely stuck where they are for now.

“Once the job market changes, turnover will increase significantly in the IT field,” he said.

Like stock traders working under extreme pressure on Wall Street or medical interns working 36-hour shifts, the tech industry is a high-stress environment – one that’s not suited to every worker.

“If you can’t live with that pressure, you should go somewhere else,” said Reynolds. “For people in Silicon Valley, it’s who they are. It’s the kind of person they are.”



 

Microsoft’s rollout of Windows 10 gets B+ grade

General vibe of the new OS remains positive, say analysts

Microsoft has done a good job rolling out Windows 10 in the first two weeks, analysts said today, and the general vibe for Windows 8’s replacement has been positive, even though glitches have dampened some enthusiasm.

“If I had to give Microsoft a letter grade, it would be a B or a B+,” said Steve Kleynhans of Gartner. “It’s not an A because it hasn’t gone perfectly. They’ve stubbed their toe over privacy issues, for example.”

Microsoft began serving up the free Windows 10 upgrade late on July 28, giving participants in the firm’s Insider preview program first shot at the production code. It then slowly began triggering upgrade notices on Windows 7 and 8.1 machines whose owners had earlier “reserved” copies through an app planted on their devices this spring.

The Redmond, Wash. company has said little of the rollout’s performance other than to tout that 14 million systems were running Windows 10 within 24 hours of its debut.

Estimates based on user share data from U.S. analytics company Net Applications, however, suggest that by Aug. 8, some 45 million PCs were powered by Windows 10.

Analysts largely applauded the launch. “As far as the roll-out, it’s not any worse than any other Windows,” said Kleynhans. “But it’s all happening at this compressed timetable.

“And social media now amplifies any problems,” he continued, much more so than three years ago when Windows 8 was released, much less in 2009, when Microsoft last had a hit on its hands.

Others were more bullish on Microsoft’s performance. “Windows 10’s go-to-market was really quite good,” said Wes Miller of Directions on Microsoft, a research firm that specializes in tracking the company’s moves.

Miller was especially impressed with Microsoft’s ability to make customers covet the upgrade. “Something Microsoft has not always done a great job of is creating a sense of exclusivity,” said Miller. “But they’re withholding [the upgrade] just enough that there’s a sense of excitement. People are saying, ‘I want it, I’m not getting the upgrade yet.’ Arguably, that’s exactly what Microsoft wants.”

Windows 10’s rollout has departed from those of past editions in significant ways.
Historically, Microsoft released a new Windows to its OEM (original equipment manufacturer) partners first, who were given months to prepare new devices pre-loaded with the operating system. Only when the computer makers were ready did Microsoft deliver paid upgrades to customers who wanted to refresh their current hardware. Relatively few users paid for the upgrades; most preferred to purchase a new PC with the new OS already installed.

This cycle, Microsoft gave away the Windows 10 upgrade to hundreds of millions of customers — those running a Home or Pro/Professional edition of Windows 7 or Windows 8.1 — to jumpstart the new OS’s adoption. With some exceptions, the upgrade hit before OEMs had prepared new devices or seeded them to retail.

Because of the large number of customers eligible for the free upgrade, Microsoft announced it would distribute the code in several waves that would take weeks (according to Microsoft) or months (the consensus of analysts) to complete. While some had predicted that the upgrade’s massive audience would stress the delivery system Microsoft had built, or even affect the Internet at large, neither happened.

The “Get Windows 10” app — which was silently placed on PCs beginning in March — not only served as a way to queue customers for the upgrade, but also ran compatibility checks to ensure the hardware and software would support the new operating system, another slick move by Microsoft.

“Microsoft rolled out Windows 10 to the audience that would be most receptive,” said Patrick Moorhead, principal analyst at Moor Insights & Strategy, referring to the Insiders-get-it-first tactic. “Then they rolled it out to those who weren’t Insiders, but who had expressed a desire to get the upgrade. And only those [whose devices] passed all of its tests got it. That was a smart thing to do.”

The latter was designed to limit upgrade snafus, something Microsoft has chiefly, although not entirely, accomplished. “While the rollout was pretty clean, there have been glitchy issues here and there,” said Kleynhans, who cited post-Windows-10-upgrade updates that crippled some consumers’ machines.

Moorhead echoed that, highlighting the out-the-gate problem many had keeping Nvidia’s graphic drivers up-to-date as Microsoft’s and Nvidia’s update services tussled over which got to install a driver. “Problems have been more anecdotal than system-wide,” Moorhead said. “And they seem to get remedied very quickly.”

The bungles haven’t been widespread enough to taint the generally favorable impression of Windows 10 generated by social media, news reports and Microsoft’s PR machine, the analysts argued.

“Overall, I’d say Windows 10 has received a much more positive reception than other [editions of] Windows,” said Moorhead, who argued the reaction is justified, since the developing consensus is that Windows 10 is a big improvement over its flop-of-a-predecessor, Windows 8.

“The vibe is positive, but it’s much more about consumers now than businesses,” said Directions’ Miller. Enterprises, he said, will take a wait-and-see approach — as they always do — before jumping onto Windows 10, as they must if they’re to stick with Microsoft, a given since there isn’t a viable alternative.

A credible reaction from corporate customers, Miller continued, won’t be visible until Microsoft finishes unveiling its update tracks, called “branches,” particularly the “Long-term servicing branch” (LTSB). That branch will mimic the traditional servicing model where new features and functionality will be blocked from reaching systems that businesses don’t want to see constantly changing.

“People are liking what they are getting out of the other end” of the upgrade, added Kleynhans. “From what I’ve heard, they’re happy, surprisingly happy, and generally pretty positive about the OS. But I’d expect the new shine to wear off after the first couple of weeks.”



 

 

 

Sorriest technology companies of 2015

A rundown of the year in apologies from tech vendors and those whose businesses rely heavily on tech.

Sorry situation
Despite all the technology advances that have rolled out this year, it’s also been a sorry state of affairs among leading network and computing vendors, along with businesses that rely heavily on technology. Apple, Google, airlines and more have issued tech-related mea culpas in 2015…

Sony says Sorry by saying Thanks
Network outages caused by DDoS attacks spoiled holiday fun for those who got new PlayStation 4 games and consoles, so Sony kicked off 2015 with an offer of 10% off new purchases, plus an extended free trial for some.

NSA’s backdoor apology
After getting outed by Microsoft and later Edward Snowden for allowing backdoors to be inserted into devices via a key security standard, the NSA sort of apologized. NSA Director of Research Michael Wertheimer, writing in the Notices of the American Mathematical Society, acknowledged that mistakes were made in “The Mathematics Community and the NSA.” He wrote in part: “With hindsight, NSA should have ceased supporting the Dual_EC_DRBG algorithm immediately after security researchers discovered the potential for a trapdoor.”

You probably forgot about this flag controversy
China’s big WeChat messaging service apologized in January for bombarding many of its hundreds of millions of users – and not just those in the United States — with Stars and Stripes icons whenever they typed in the words “civil rights” on Martin Luther King, Jr. Day. WeChat also took heat for not offering any sort of special icons when users typed in patriotic Chinese terms. The special flag icons were only supposed to have been seen by US users of the service.

Go Daddy crosses the line
Web site domain provider Go Daddy as usual relied on scantily clad women as well as animals to spread its message during this past winter’s Super Bowl. The surprising thing is that the animals are what got the company in hot water this time. The company previewed an ad that was supposed to parody Budweiser commercials, but its puppy mill punch line didn’t have many people laughing, so the CEO wound up apologizing and pulling the ad.

Name calling at Comcast
Comcast scrambled to make right after somehow changing the name of a customer on his bill to “(expletive… rhymes with North Pole) Brown” from his actual name, Ricardo Brown. The change took place after Brown’s wife called Comcast to discontinue cable service. The service provider told a USA Today columnist that it was investigating the matter, but in the meantime was refunding the Browns for two years of previous service.

Where to start with Google?
Google’s Department of Apologies has been busy this year: In January the company apologized when its translation services spit out anti-gay slurs in response to searches on the terms “gay” and “homosexual.” In May, Google apologized after a Maps user embedded an image of the Android mascot urinating on Apple’s logo. This summer, Google has apologized for its new Photos app mislabeling African Americans as “gorillas” and for Google Niantic Labs’ Ingress augmented reality game including the sites of former Nazi concentration camps as points of interest.

Carnegie Mellon admissions SNAFU
Carnegie Mellon University’s Computer Science School in February apologized after it mistakenly accepted 800 applicants to its grad program, only to send out rejection notices hours later. The irony of a computer glitch causing this problem at such a renowned computer science school was lost on no one…

Lenovo Superfish debacle
Lenovo officials apologized in February after it was discovered that Superfish adware packaged with some of its consumer notebooks was not only a pain for users but also included a serious security flaw resulting from interception of encrypted traffic. “I have a bunch of very embarrassed engineers on my staff right now,” said Lenovo CTO Peter Hortensius. “They missed this.” Lenovo worked with Microsoft and others to give users tools to rid themselves of Superfish.

Apple apologizes for tuning out customers
Apple apologized in March for an 11-hour iTunes service and App Store outage that it blamed on “an internal DNS error at Apple,” in a statement to CNBC.

Blame the iPads
American Airlines in April apologized after digital map application problems on pilot iPads delayed dozens of flights over a two-day period. The airline did stress that the problem was a third-party app, not the Apple products themselves.

Locker awakened
The creator of a strain of ransomware called Locker apologized after he “woke up” the malware, which encrypted files on infected devices and asked for money to release them. A week after the ransomware was activated, the creator apparently had a change of heart and released the decryption keys victims needed to unlock their systems.

HTC wants to be Hero
Phonemaker HTC’s CEO Cher Wang, according to the Taipei Times, apologized to investors in June after the company’s new One M9 flagship phone failed to boost sales. “HTC’s recent performance has let people down,” said Wang, pointing to better times ahead with the planned fall release of a new phone dubbed Hero.

Ketchup for adults only
Ketchup maker Heinz apologized in June after an outdated contest-related QR code on its bottles sent a German man to an X-rated website. Meanwhile, the website operator offered the man who complained a free year’s worth of access, which he declined.

Livid Reddit users push out interim CEO
Interim Reddit CEO Ellen Pao apologized in July (“we screwed up”) after the online news aggregation site went nuts over the sudden dismissal of an influential employee known for her work on the site’s popular Ask Me Anything section. Pao shortly afterwards resigned from her post following continued demands for her ouster by site users.

Blame the router
United Airlines apologized (“we experienced a network connectivity issue. We are working to resolve and apologize for any inconvenience.”) in July after being forced to ground its flights for two hours one morning due to a technology issue that turned out to be router-related. United has suffered a string of tech glitches since adopting Continental’s passenger management system a few years back following its acquisition of the airline.

Billion dollar apology
Top Toshiba executives resigned in July following revelations that the company had systematically padded its profits by more than $1 billion over a six-year period. “I recognize there has been the most serious damage to our brand image in our 140-year history,” said outgoing President Hisao Tanaka, who is to be succeeded by Chairman Masashi Muromachi. “We take what the committee has pointed out very seriously, and it is I and others in management who bear responsibility.”



 

 

Ultimate guide to Raspberry Pi operating systems, part 1

Raspberry Pi
Since we published a roundup of 10 Raspberry Pi operating systems, the number of choices has exploded. In this piece I’m including every option I could find (and for you pickers of nits, yes, I’m counting individual Linux distros as individual operating systems, so sue me). If you know of anything I’ve missed or a detail that’s wrong, please drop me a note at feedback@gibbs.com and I’ll update the piece and give you a shout out.

Want to know immediately when the next installment of this guide is published? Sign up and you’ll be the first to know.

Now on with the awesomeness …

Adafruit – Occidentalis v0.3
Occidentalis v0.3 is the result of running Adafruit’s Pi Bootstrapper on a Raspbian installation to build a platform for teaching electronics using the Raspberry Pi. Arguably not a true distro (the previous versions were), it’s included because it’s kind of cool.

Arch Linux ARM
Arch Linux ARM is a fork of Arch Linux built for ARM processors. This distro has a long history of being used in a wide range of products, including the Pogoplug as well as the Raspberry Pi. It’s known for being both fast and stable. There is no default desktop but above, I show the option of Openbox.

BerryTerminal
BerryTerminal has not been updated for several years: “BerryTerminal is a minimal Linux distribution designed to turn the Raspberry Pi mini computer into a low-cost thin client. It allows users to login to a central Edubuntu or other [Linux Terminal Server Project] server, and run applications on the central server.”

DarkELEC
DarkELEC: “None of the currently available solutions do a perfect job running XBMC on the Pi, however OpenELEC comes by far the closest, in spite of its locked down nature. [The DarkELEC] fork aims to remedy the very few flaws in its implementation and to focus 100% on the Pi, while also sticking to the upstream and incorporating its updates.”

Debian 8 (“Jessie”)
Debian 8 (“Jessie”) is the latest and greatest version of Debian, and Sjoerd Simons of Collabora appears to be the first person to have gotten it running on the Raspberry Pi 2, back in February of this year. As of this writing, there isn’t an “official” release of Debian 8 for the Raspberry Pi, so if you go down this path, expect a few bumps (and complexities) along the way.

DietPi
DietPi: “At its core, DietPi is the go to image for a minimal Raspbian/Debian Server install. We’ve stripped down and removed everything from the official Raspbian image to give us a bare minimal Raspbian server image that we call DietPi-Core.” DietPi is optimized for all Pi models and has a 120MB compressed image, fits on a 1GB or greater SD card, has only 11 running processes after boot, requires just 16MB of memory after boot, and, “unlike most Raspbian minimal images, ours includes full Wifi support.” An LXDE desktop is optional.

Fedora Remix (Pidora)
Fedora Remix (Pidora): Pidora is a Fedora Remix, a customized version of the Unix-like Fedora system, running on the ARM-based Raspberry Pi single-board computer, and it moves faster than a politician taking a donation. First released in 2003, Fedora has a long history and is noted for its stability. Given that there are thousands of packages available in the Pidora repository, you’ll be able to find pretty much any functionality or service you need for your project.

GeeXboX ARM
GeeXboX ARM is a free and open source media center Linux distribution for embedded devices and desktop computers. GeeXboX is not an application, it’s a full-featured OS that can be booted from a LiveCD, from a USB key, from an SD/MMC card, or installed on an HDD. The core media delivery application is XBMC Media Center 12.2 “Frodo”.

IPFire
IPFire is a specialized version of Linux that operates as a firewall. Designed to be highly secure and fast, it’s managed through a Web-based interface.

Kali Linux
Kali Linux is one of my favorite flavors of Linux because of its excellent collection of penetration testing and diagnostic tools (plus it has a great logo). Being able to run this bad boy on a Raspberry Pi means you can have your own custom pen tester in your pocket.

Lessbian 8.1 (“Raptor”)
Lessbian 8.1 (“Raptor”): A stripped-down, bare-minimal Debian “Jessie”. The goal of Lessbian is to “provide a small and fast jessie image for servers and wifi security testing without the madness of system.” This release is described as “A bootable wifi system optimized for throughput, performance, and encryption” and it’s a great platform for running a Tor relay.

Minepeon
Minepeon: There’s gold in them thar’ Bitcoin mines! You can get it out using the Minepeon operating system, based on Linux and running on a Raspberry Pi. Of course you’re going to need a lot of machines to get your digital “quan” given how much more “work” is needed to mine Bitcoin today, but given the price of the Raspberry Pi you won’t go broke assembling a roomful of miners. Show me the digital money!

Moebius
Moebius: A minimal ARM HF distribution that needs just 20MB of RAM for the entire operating system and fits on a 128MB SD card. Version 2 is the current stable version. An LXDE desktop is optional.

nOS
nOS: Based on Ubuntu and the KDE, this distro has been abandoned: “Development of nOS has stopped, existing versions will continue to work and receive updates from the package manufacturers until April 2019. The only things that will no longer be issued are updates for nOS specific software and the monthly image releases (they haven’t been going for a while anyway).”

OpenELEC
OpenELEC, an acronym for Open Embedded Linux Entertainment Center, is a Linux-based OS that runs the popular XBMC open source digital media center software. The first release of OpenELEC was in 2013 and, according to the OpenELEC Wiki, “Installing OpenELEC for Raspberry Pi from a Linux computer is a very simple process and whether you’re new to Linux or a hardened *NIX user, you shouldn’t have any problems.”

OpenWrt for Raspberry Pi
OpenWrt for Raspberry Pi is “a Linux distribution for embedded devices.” Systems based on OpenWrt are most often used as routers and, with something like 3,500 optional add-on packages, its features can be tailored in pretty much any way imaginable. Want an ultraportable, incredibly tiny wireless router that can be run anywhere? OpenWrt on a Raspberry Pi running off a battery with a USB WiFi dongle can only be described as “epic.”

Raspberry Digital Signage
Raspberry Digital Signage is based on Debian Linux running on a Raspberry Pi and is used in Web kiosks and digital signage (including digital photo frames). A really well-thought-out system, Digital Signage is designed to be easily administered while being as “hacker-proof” as possible.

Raspberry Pi Thin Client
Raspberry Pi Thin Client: Creates a very low-cost thin client that supports Microsoft RDC, Citrix ICA, VMware View, OpenNX and SPICE.

Raspbian Pisces R3
Raspbian Pisces R3: Another non-official distro, Raspbian Pisces created by Mike Thompson, is an SD image of Raspbian and creates a minimal Debian installation with the LXDE desktop.

Raspbian Server Edition
Raspbian Server Edition: A stripped-down version of Raspbian with some extra packages that boots to a command prompt. It is an excellent tool to use for testing hard float compilations and running benchmarks.

Raspbmc
Raspbmc: Yet another distro that is designed for the popular XBMC open source digital media center, Raspbmc is lightweight and robust.

RaspEX (Edition 150706)
RaspEX (Edition 150706): RaspEX is a full Linux desktop system with LXDE and many other useful programs pre-installed. Chromium is used as Web Browser and Synaptic as Package Manager. RaspEX uses Ubuntu’s software repositories so you can install thousands of extra packages if you want.

Raspbian Debian 7.8 (“Wheezy”)
Raspbian Debian 7.8 (“Wheezy”): The Raspbian Debian “Wheezy” distro for the Raspberry Pi is a fully functional Debian Wheezy installation containing the LXDE desktop, the Epiphany browser, Wolfram Mathematica, and Scratch. It supports the Raspberry Pi and the Raspberry Pi 2 and is the current Debian version supported by the Raspberry Pi Foundation.

Red Sleeve Linux
Red Sleeve Linux: “RedSleeve Linux is a 3rd party ARM port of a Linux distribution of a Prominent North American Enterprise Linux Vendor (PNAELV). They object to being referred to by name in the context of clones and ports of their distribution, but if you are aware of CentOS and Scientific Linux, you can probably guess what RedSleeve is based on. RedSleeve is different from CentOS and Scientific Linux in that it isn’t a mere clone of the upstream distribution it is based on – it is a port to a new platform, since the upstream distribution does not include a version for ARM.”

RISC OS Pi
RISC OS Pi: Originally developed and released in 1987 by UK-based Acorn Computers Ltd., RISC OS is, as the RISC OS Web site claims, “its own thing – a very specialized ARM-based operating system… if you’ve not used it before, you will find it doesn’t behave quite the same way as anything else.” RISC OS Pi has been available on the Raspberry Pi since 2012.

SliTaz GNU/Linux Raspberry Pi
The SliTaz GNU/Linux Raspberry Pi distribution is “a small operating system for a small computer! The goal is to provide a fast, minimal footprint and optimized distro for the Raspberry Pi. You can setup a wide range of system types, from servers to desktops and learning platforms.”

Windows 10 IoT Core Edition
Windows 10 IoT Core Edition’s GUI stack is limited to Microsoft’s Universal App Platform, so there’s no Windows desktop or even a command prompt. With PowerShell remoting you get a PowerShell terminal from which you can run Windows commands and see the output of native Win32 apps. Currently available as a preview version, it has no support for Wi-Fi or Bluetooth.

outro
In our next installment of Network World’s Ultimate Guide to Raspberry Pi Operating Systems we’ll be covering a whole new collection: Bodhi, Commodore Pi, FreeBSD, Gentoo, ha-pi, I2Pberry, Kano OS, MINIBIAN, motionPie, Nard, NetBSD, OSMC, PiBang Linux, PiBox, PiMAME, PiParted, Plan 9, PwnPi, RasPlex, Slackware ARM, SlaXBMCRPi, slrpi, Tiny Core Linux, Ubuntu, Volumio, XBian, and more.

Want to know immediately when the next installment is published? Sign up and you’ll be the first to know.
Want more Pi? Check out 10 Reasons why the Raspberry Pi 2 Model B is a killer product and MIPS Creator CI20: Sort of a challenge to the Raspberry Pi 2 Model B. What could be the next RPi? Check out Endless: A computer the rest of the world can afford and How low can we go? Introducing the $9 Linux computer!



Microsoft buys sales-gamification startup with eye to CRM combo

Microsoft has acquired Incent Games and plans to integrate the Texas startup’s FantasySalesTeam sales-gamification software into Dynamics CRM.

Terms of the deal were not disclosed.

Adding the fantasy sports component to its CRM offering will give companies a tool to make incentive programs for sales staff more engaging, according to Bob Stutz, corporate vice president for Microsoft Dynamics CRM, who discussed the news in a blog post.

Microsoft will integrate the platform into its own Dynamics CRM software in the coming months, Stutz said. It will also continue to support customers using FantasySalesTeam with other CRM products.

However, the move drew some derisive commentary from at least one analyst.

“Are they kidding?” said Denis Pombriant, managing principal at Beagle Research Group, via email. “Let’s see, for many years and even centuries, we have incentivized sales people with money (the carrot) and job loss (the stick). That wasn’t enough? Really?”

The real problem with incentives is the difficulty of individualizing them and applying them across a product line containing more than one product, and that can’t be solved with gamification, Pombriant said.

Rather, it’s a big data problem, he suggested, and it can be solved by comprehensive compensation-management systems such as what’s offered by companies like Xactly and Callidus.

“We spend all kinds of effort and resources trying to squeeze more productivity out of sales reps,” Pombriant said. “It makes little sense to me to introduce a game system that takes their attention away from the business at hand rather than pursuing results.”



Why you need to care more about DNS

There’s one key part of your network infrastructure that you’re probably not monitoring, even though it keeps you connected, can tell you a lot about what’s happening inside your business – and is an increasing source of attacks. DNS isn’t just for domain names any more.

When you say Domain Name System (DNS), you might think, naturally enough, of domain names and the technical details of running your Internet connection. You might be concerned about denial of service attacks on your website, or someone hijacking and defacing it.

While those certainly matter, DNS isn’t just for looking up Web URLs any more; it’s used by software to check licences, by video services to get around firewalls and, all too often, by hackers stealing data out from your business. Plus, your employees may be gaily adding free DNS services to their devices that, at the very least, mean you’re not in full control of your network configuration. It’s a fundamental part of your infrastructure that’s key to business productivity, as well as a major avenue of attack, and you probably have very little idea of what’s going on.

DNS is the most ubiquitous protocol on the Internet, but it’s also probably the most ignored. Data Leak Protection (DLP) systems that check the protocols used by email, web browsers, peer-to-peer software and even Tor often neglect DNS. “Nobody looks much at DNS packets, even though DNS underlies everything,” says Cloudmark CTO Neil Cook. “There’s a lot of DLP done on web and email but DNS is sitting there, wide open.”

Data lost in the Sally Beauty breach last year was exfiltrated in packets disguised as DNS queries, but Cook points out some unexpected though legitimate uses: “Sophos uses DNS tunnelling to get signatures; we even use it for licensing.”

A number of vendors are starting to offer DNS tools, from Infoblox’s appliances to OpenDNS’ secure DNS service. Palo Alto Networks is starting to offer DNS inspection services, U.K. domain registry Nominet has just launched its Turing DNS visualisation tool to help businesses spot anomalies in their DNS traffic, and Cloudmark analyzes patterns of DNS behavior to help detect links in email going to sites that host malware. There are also any number of plugins for common monitoring tools that will give you basic visibility of what’s going on.

Few businesses do any monitoring of their DNS traffic despite it being the source of many attacks. It’s not just the malware that runs on point-of-sale systems, capturing customer credit cards in attacks like those on Sally Beauty, Home Depot and Target, that uses DNS tunnelling. DNS is the most ubiquitous command-and-control channel for malware, and it’s also used to move the data malware steals out of your business.

“DNS is frequently used as a conduit to surreptitiously tunnel data in and out of the company,” says Cricket Liu, the chief DNS architect at Infoblox, “and the reason people who write malware are using DNS to tunnel out this traffic is because it’s so poorly monitored, most people have no idea what kind of queries are going over their DNS infrastructure.”
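
Once query names are being logged, the pattern Liu describes is straightforward to look for. Below is a minimal sketch in Python that flags query names with unusually long or high-entropy labels, two classic tunnelling indicators; the log path and one-name-per-line format are illustrative assumptions, not any vendor’s actual output.

```python
# Minimal sketch: flag DNS query names that look like tunnelled data.
# Assumes a plain-text log with one queried name per line; the path and
# format are illustrative, not any particular product's output.
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy in bits per character; encoded payloads score high."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunnelling(qname: str, max_label: int = 40, entropy_cutoff: float = 4.0) -> bool:
    labels = qname.rstrip(".").split(".")
    longest = max(labels, key=len)
    # Very long labels and near-random character mixes are typical of
    # data smuggled out inside query names.
    return len(longest) > max_label or label_entropy(longest) > entropy_cutoff

with open("dns_queries.log") as log:  # hypothetical export of query names
    for line in log:
        qname = line.strip()
        if qname and looks_like_tunnelling(qname):
            print("suspicious query:", qname)
```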

There’s also the problem of people using DNS to bypass network security controls; that might be employees avoiding network restrictions, security policies or content filtering, or it might be attackers avoiding detection.

DNS attacks are a widespread problem
In a recent Vanson Bourne study of U.S. and U.K. businesses, 75 percent said they’d suffered a DNS attack (including denial of service and DNS hijacking as well as data theft through DNS), with 49 percent having experienced an attack during 2014. Worryingly, 44 percent said it was hard to justify investments in DNS security because senior management didn’t recognize the issue.

That’s because they think of DNS as a utility, suggests Nominet CTO Simon McCalla. “For most CIOs, DNS is something that happens in the background and isn’t a high priority for them. As long as it works, they’re happy. However, what most of them don’t realize is that there is a wealth of information inside their DNS that tells them what is going on within their network internally.”

Liu is blunter: “I’m surprised how few organizations bother to do any kind of monitoring of their DNS infrastructure. DNS doesn’t get any respect, yet TCP/IP networks don’t work without DNS; it’s the unrecognized lynchpin.” Liu insists “it’s not rocket science to put in monitoring of your DNS infrastructure; there are lots of mechanisms out there for understanding what queries DNS servers are handling and their responses. And you really ought to be doing so, because this infrastructure is no less critical than the routing and switching infrastructure that actually moves packets across your network.”
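
As a concrete illustration of how low that bar is, the sketch below watches live queries and tallies the busiest names. It assumes the scapy packet library and capture rights on the resolver’s network segment; a resolver’s own query log would serve equally well.

```python
# Rough sketch of one such mechanism: watch live DNS queries with scapy
# (an assumption; any packet capture or the resolver's query log works
# just as well). Sniffing requires root/administrator rights.
from collections import Counter
from scapy.all import sniff, DNSQR

query_counts = Counter()

def record(pkt):
    # Count every question name seen in DNS traffic on the wire.
    if pkt.haslayer(DNSQR):
        qname = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        query_counts[qname] += 1

sniff(filter="udp port 53", prn=record, count=500)  # stop after 500 packets

for name, hits in query_counts.most_common(20):
    print(f"{hits:6d}  {name}")
```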

Usually, he finds demonstrating the threat is enough to get management attention. “Most CIOs – once they see how with one compromised machine on the inside of a network you can set up a bi-directional channel between that endpoint and a server on the internet – realize they need to do something about this. It’s just a matter of being faced with that cold hard reality.”

Tackling DNS security

First, you need to stop thinking about DNS as being about networking and just “part of the plumbing,” says David Ulevitch, the CEO of OpenDNS (which Cisco is in the process of acquiring).

“It used to be network operators who ran your DNS, and they were looking at it in terms of making sure the firewall was open, and not blocking what they viewed as a critical element of connectivity as opposed to a key component of security policy, access control and auditing. But we live in a world today where every network operator has to be a security practitioner.”

If you actively manage your DNS, you can apply network controls at a level employees (and attackers) can’t work around. You can detect phishing attacks and malware command and control more efficiently at the DNS layer than using a web proxy or doing deep packet inspection, and you can detect it as it happens rather than days later.

“DNS is a very good early warning system,” says Liu. “You can pretty much at this point assume you have infected devices on your network. DNS is a good place to set up little tripwires, so when malware and other malicious software gets on your network, you can easily detect its presence and its activity, and you can do some things to minimize the damage it does.” You could even see how widespread the infection is, by looking for similar patterns of behaviour.
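
A tripwire in this sense can be as simple as matching query names against a watch list of known command-and-control or sinkholed domains. The sketch below illustrates the idea; the blocklist file and the “client qname” log format are hypothetical placeholders.

```python
# Sketch of a DNS "tripwire": alert when any internal client looks up a
# domain on a watch list. The blocklist file and the "client qname"
# log format are hypothetical placeholders.
with open("known_bad_domains.txt") as f:
    blocklist = {line.strip().lower().rstrip(".") for line in f if line.strip()}

def tripped(qname: str) -> bool:
    qname = qname.rstrip(".").lower()
    # Match the listed domain itself or any subdomain of it.
    return any(qname == bad or qname.endswith("." + bad) for bad in blocklist)

with open("dns_queries.log") as log:
    for line in log:
        client, _, qname = line.strip().partition(" ")
        if qname and tripped(qname):
            print(f"tripwire: {client} asked for {qname}")
```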

Services like OpenDNS and Infoblox can also look across more than your network. “It’s easy to build a baseline of what normal looks like and do anomaly detection”, says Ulevitch. “Suppose you’re an oil and gas business in Texas and a new domain name pops up in China pointing to an IP address in Europe, and no other oil company is looking at this domain. Why should you be the guinea pig?”
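
The baseline Ulevitch describes can start very simply: remember which registered domains your network has resolved before and flag the ones appearing for the first time. The sketch below shows that idea; the file names and log format are assumptions for illustration.

```python
# Sketch of the baseline idea: remember which registered domains the
# network has resolved before and flag first-time lookups for review.
import json
import os

BASELINE_FILE = "dns_baseline.json"
baseline = set(json.load(open(BASELINE_FILE))) if os.path.exists(BASELINE_FILE) else set()

def registered_domain(qname: str) -> str:
    # Crude approximation: last two labels. A real system would use the
    # public suffix list to handle names like example.co.uk correctly.
    labels = qname.strip().rstrip(".").lower().split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else ""

new_domains = set()
with open("dns_queries.log") as log:
    for line in log:
        domain = registered_domain(line)
        if domain and domain not in baseline:
            new_domains.add(domain)

for domain in sorted(new_domains):
    print("never seen before:", domain)

with open(BASELINE_FILE, "w") as f:
    json.dump(sorted(baseline | new_domains), f)
```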

You also need to monitor how common addresses are resolved on your network – hackers can try to redirect lookups for sites like PayPal to their own malicious sites – and where your external domain points to. When Tesla’s website was recently redirected to a spoof page put up by hackers, who also took control of the company’s Twitter account (and used it to flood a small computer repair store in Illinois with calls from people they’d fooled into believing they’d won free cars), the attackers also changed the name servers used to resolve the domain name. Monitoring their DNS might have given Tesla a heads-up that something was wrong before users started tweeting pictures of the hacked site.
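
Checks like these can be automated. The sketch below compares the answers for a few key names against expected values, covering both addresses and name-server delegations; it assumes the dnspython package, and the expected records are placeholders rather than real data.

```python
# Sketch of keeping an eye on how your own key names resolve, both the
# addresses and the delegated name servers. Assumes the dnspython
# package; the expected values are placeholders, not real records.
import dns.resolver

def answers(name: str, rtype: str) -> set:
    return {str(r) for r in dns.resolver.resolve(name, rtype)}

checks = [
    ("www.example.com", "A",  {"192.0.2.10"}),                            # placeholder address
    ("example.com",     "NS", {"ns1.example.com.", "ns2.example.com."}),  # placeholder servers
]

for name, rtype, expected in checks:
    seen = answers(name, rtype)
    if seen != expected:
        print(f"ALERT: {name} {rtype} now resolves to {seen}, expected {expected}")
```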

At the very least, remember that DNS underpins all your online services, Ulevitch points out. “The bar is very low for improving DNS. Usually, DNS is seen as a cost center; people don’t invest in reliable enough infrastructure or high enough performance equipment, so it’s hard to cope with a high volume of transactions.”

That doesn’t only matter if you’re targeted by a DNS attack. “Organizations should look at DNS performance because it will have a material impact on everything you do online. Every time you send an email or open an app you’re doing DNS requests. These days, web pages are very complex and it’s not uncommon to have more than 10 DNS requests to load a page. That can be a whole extra second or more, just to handle the DNS components of loading a page.”
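
To get a feel for that cost, you can simply time the lookups a typical page triggers. The sketch below does this with standard library resolver calls; the hostnames are illustrative, and operating-system caching means a warm run will understate the cold-lookup cost.

```python
# Rough sketch of measuring what name resolution adds to a page load:
# time the lookups for the handful of hostnames one page pulls in.
# The names are illustrative; OS-level caching makes repeat runs faster.
import socket
import time

names = ["www.example.com", "cdn.example.net", "analytics.example.org"]

total = 0.0
for name in names:
    start = time.perf_counter()
    try:
        socket.getaddrinfo(name, 443)
    except socket.gaierror:
        pass  # unresolvable names still cost time
    elapsed = time.perf_counter() - start
    total += elapsed
    print(f"{name:28s} {elapsed * 1000:7.1f} ms")

print(f"total DNS time for this page load: {total * 1000:.1f} ms")
```
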
Tracking business behavior

Monitoring DNS can also give you a lot of information about what’s going on across your business far beyond the network. “We live in a world where the network perimeter is becoming ephemeral and where services are easy to adopt,” Ulevitch points out. “A marketing executive can sign up to Salesforce; if you’re looking at the DNS you can see that. You can see how many employees are using Facebook. You can see devices showing up in your network, whether it’s because they’re checking a licence or doing data exfiltration. If you have a hundred offices, you can still see who is connecting devices.”

That’s not just PCs either, he points out; printers and televisions and IoT devices are increasingly connecting to your business network. “Do I want my TVs phoning home? If you look at the Samsung privacy policy, it says the TV has a microphone that might be listening at any time; do I really want that in the corporate boardroom? Maybe I want to apply DNS policies so my TVs can’t phone home.”

Infoblox’s Liu agrees. “IoT devices are often not designed with a lot of security in mind. You want to make sure devices are connecting where they should be and that if someone throws something else onto your IoT network they can’t access your internal network. DNS is a useful place to monitor and control that access.”

And because you’re already using DNS, monitoring it isn’t disruptive, Ulevitch points out. “Usually in security, the reason most things aren’t used is the effort needed to make sure they don’t have a detrimental effect on user performance.”

In fact, you need a good reason not to be doing this, he says. “There are fundamental best practices in security and one of them is network visibility. Not being able to see the traffic on your network means you’re flying blind. Finding a way to inspect DNS traffic is a fundamental requirement of a strong security posture. To not know what’s happening on your network is borderline derelict.”

