Is the cloud the right spot for your big data?

Is the cloud a good spot for big data?

That’s a controversial question, and the answer changes depending on who you ask.

Last week I attended the HP Big Data Conference in Boston, where both an HP customer and an HP executive told me that big data isn’t a good fit for the public cloud.

CB Bohn is a senior database engineer at Etsy and a user of HP’s Vertica database. The online marketplace uses the public cloud for some workloads, but its primary functions run out of a co-location center, Bohn said. It doesn’t make sense for the company to lift and shift its Postgres, Vertica SQL and Hadoop workloads into the public cloud, he said. Porting all the data associated with those programs into the cloud would be a massive undertaking, and once it’s transferred, the company would have to pay ongoing costs to store it there. Meanwhile, Etsy already has a co-lo facility set up and in-house expertise to manage the infrastructure those programs require. The cloud just isn’t a good fit for Etsy’s big data, Bohn said.

Chris Selland, VP of Business Development at HP’s Big Data software division, says most of the company’s customers aren’t using the cloud in a substantial way with big data. Perhaps that’s because HP’s big data cloud, named Helion, isn’t quite as mature as, say, Amazon Web Services or Microsoft Azure. Still, Selland said there are technical challenges (like data portability and data latency) along with non-technical reasons, such as company executives being more comfortable with the data not being in the cloud.

Bohn isn’t totally against the cloud, though. For quick, large processing jobs the cloud is great. “Spiky” workloads that need fast access to large amounts of compute resources are ideal for it. But if an organization has a constant need for compute and storage resources, it can be more efficient to buy commodity hardware and run it yourself.

Public cloud vendors like Amazon Web Services make the opposite argument. I asked CTO Werner Vogels about private clouds recently and he argued that businesses should not waste time on building out data center infrastructure when AWS can supply it to them. Bohn argues that it’s cheaper to just buy the equipment than to rent it over the long-term.
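
Bohn’s long-term rent-versus-buy argument comes down to simple arithmetic: a break-even calculation between cumulative rental fees and up-front hardware plus ongoing co-lo costs. The sketch below uses entirely hypothetical prices (placeholders, not real AWS or co-lo quotes) just to show the shape of the comparison.

```python
# Back-of-the-envelope rent-vs-buy comparison for a steady, always-on
# workload. All prices are hypothetical placeholders, not real quotes.

HOURS_PER_MONTH = 730

def cloud_cost(months, hourly_rate):
    """Cumulative spend renting an always-on instance."""
    return months * hourly_rate * HOURS_PER_MONTH

def owned_cost(months, hardware_capex, monthly_opex):
    """Cumulative spend buying hardware up front plus colo/power/admin."""
    return hardware_capex + months * monthly_opex

def break_even_month(hourly_rate, hardware_capex, monthly_opex, horizon=120):
    """First month at which owning is cheaper than renting, or None."""
    for m in range(1, horizon + 1):
        if owned_cost(m, hardware_capex, monthly_opex) < cloud_cost(m, hourly_rate):
            return m
    return None

# Example: a $1.00/hour instance vs. a $15,000 server with $300/month
# in co-lo costs breaks even at month 35, i.e. just under three years.
month = break_even_month(hourly_rate=1.00, hardware_capex=15000, monthly_opex=300)
```

With these made-up figures ownership pulls ahead around the three-year mark; real quotes, reserved-instance discounts and staffing costs can move the break-even point dramatically in either direction, which is why the answer differs so much from company to company.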

Even as the public cloud has matured, there’s clearly still a debate about which workloads belong in the cloud and which don’t.

The real answer to this question is that it depends on the business. For startups that were born in the cloud and have all their data there, it will make sense to do data processing in the cloud. For companies with big data center footprints or co-location infrastructure already set up, there may be no reason to lift and shift. Each business will have its own specific use cases, some of which may be good fits for the cloud and others not.


MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

Spotlight may be on Amazon, but tech jobs are high profit and high stress

It’s true. People working in Silicon Valley may cry at their desks, may be expected to respond to emails in the middle of the night, and may be in the office when they’d rather be home sick in bed.

But that’s the price employees pay to work for some of the most successful and innovative tech companies in the world, according to industry analysts.

“It’s a pressure cooker for tech workers,” said Bill Reynolds, research director for Foote Partners LLC, an IT workforce research firm. “But for every disgruntled employee, someone will tell you it’s fine. This is the ticket to working in this area and they’re willing to pay it.”

The tech industry has been like this for years, he added.
Employees tend to be Type A personalities who thrive on the pressure, would rather focus on a project than get a full night’s sleep, and don’t mind pushing or being pushed.

If that’s not who they are, they should find another job, probably in another industry.

“A lot of tech companies failed, and the ones that made it, made it based on a driven culture. No one made it working 9 to 5,” said John Challenger, CEO of Challenger, Gray & Christmas, an executive outplacement firm. “Silicon Valley has been the vanguard of this type of work culture. It can get out of control. It can be too much and people can burn out. But it’s who these companies are.”

Work culture at tech companies, specifically at Amazon, hit the spotlight earlier this week when the New York Times ran a story on the online retailer and what it called its “bruising workplace.”

The story talked about employees crying at their desks, working 80-plus-hour weeks and being expected to work when they’re not well or after a family tragedy.

“At Amazon, workers are encouraged to tear apart one another’s ideas in meetings, toil long and late (emails arrive past midnight, followed by text messages asking why they were not answered), and held to standards that the company boasts are ‘unreasonably high,’” the article noted.

In response, CEO Jeff Bezos sent a memo to employees saying he didn’t recognize the company described in the Times article.

“The article doesn’t describe the Amazon I know or the caring Amazonians I work with every day,” Bezos wrote. “More broadly, I don’t think any company adopting the approach portrayed could survive, much less thrive, in today’s highly competitive tech hiring market.”

Bezos hasn’t been the only one at Amazon to respond. Nick Ciubotariu, a head of infrastructure development at Amazon, wrote a piece on LinkedIn taking on the Times article.

“During my 18 months at Amazon, I’ve never worked a single weekend when I didn’t want to. No one tells me to work nights,” he wrote. “We work hard, and have fun. We have Nerf wars, almost daily, that often get a bit out of hand. We go out after work. We have ‘Fun Fridays.’ We banter, argue, play video games and Foosball. And we’re vocal about our employee happiness.”

Amazon has high expectations of its workers because it’s one of the largest and most successful companies in the world, according to industry analysts.

The company, which started as an online bookstore, now sells everything from cosmetics to bicycles and toasters. With a market valuation of $250 billion, Amazon this summer even surpassed mega-retailer Walmart as the most valuable retailer in the U.S.

With that kind of success comes a lot of pressure to stay on top and to come up with new, innovative ways to keep customers happy.

That kind of challenge can lead to a stressful workplace where employees are called on to work long hours and to outwork their counterparts at competing companies.

It’s just the nature of the beast, according to Victor Janulaitis, CEO of Janco Associates Inc., a management consulting firm.

“If you go to work for a high-powered company where you have a chance of being a millionaire in a few years, you are going to work 70 to 80 hours a week,” he said. “You are going to have to be right all the time and you are going to be under a lot of stress. Your regular Joe is really going to struggle there.”

This kind of work stress isn’t relegated to Amazon alone. Far from it, Janulaitis said.

“I think it’s fairly widespread in any tech company that is successful,” he noted. “It’s just a very stressful environment. You’re dealing with a lot of money and a lot of Type A personalities who want to get things done. If you’re not a certain type of person, you’re not going to make it. It’s much like the Wild West. They have their own rules.”

Of course, tech companies, whether Amazon, Google, Apple or Facebook, are known to work people hard, going back to the days when IBM was launching its first PCs and Microsoft was making its Office software ubiquitous around the world.

However, tech companies also are known for giving their employees perks that people working in other industries only dream of.

Google, for instance, has world-class chefs cooking free food for its employees, while also setting up nap pods, meditation classes and sandy volleyball courts.

Netflix recently made global headlines for offering mothers and fathers unlimited time off for up to a year after the birth or adoption of a child.

It’s the yin and yang of Silicon Valley, said Megan Slabinski, district president of Robert Half Technology, a human resources consulting firm.

“All those perks – the ping pong tables, the free snacks, the free day care – that started in the tech industry come with the job because the job is so demanding,” she said. “There’s a level of demand in the tech industry that translates to the work environment.”

When asked if Amazon is any harder on its employees than other major tech companies, Slabinski laughed.

“Amazon isn’t different culturally from other IT companies,” she said. “I’ve been doing this for 16 years. You see the good, the bad and the ugly. If you are working for tech companies, the expectation is you are going to work really hard. This is bleeding-edge technology, and the trade-off is there’s less work-life balance. The people who thrive in this industry, thrive on being on the bleeding edge. If you can’t take it, you go into another industry.”

Janulaitis noted that top-tier employees are always chased by other companies, but middle-tier workers – those who are doing a good job but might not be the brightest stars of the workforce – are hunkering down and staying put.

Fears of a still-jittery job market have convinced a lot of people to keep their heads down and put up with whatever their managers ask of them, so they can keep paying their mortgages, especially if they live in pricey Silicon Valley.

That, said Janulaitis, makes companies more apt to ask even more from their employees, who know they’re likely stuck where they are for now.

“Once the job market changes, turnover will increase significantly in the IT field,” he said.

Like stock traders working under extreme pressure on Wall Street or medical interns working 36-hour shifts, the tech industry is a high-stress environment – one that’s not suited to every worker.

“If you can’t live with that pressure, you should go somewhere else,” said Reynolds. “For people in Silicon Valley, it’s who they are. It’s the kind of person they are.”



Sorriest technology companies of 2015

A rundown of the year in apologies from tech vendors and those whose businesses rely heavily on tech.

Sorry situation
Despite all the technology advances that have rolled out this year, it’s also been a sorry state of affairs among leading network and computing vendors, along with businesses that rely heavily on technology. Apple, Google, airlines and more have issued tech-related mea culpas in 2015…

Sony says Sorry by saying Thanks
Network outages caused by DDoS attacks spoiled holiday fun for those who got new PlayStation 4 games and consoles, so Sony kicked off 2015 with an offer of 10% off new purchases, plus an extended free trial for some.

NSA’s backdoor apology
After getting outed by Microsoft and later Edward Snowden for having backdoors inserted into devices via a key security standard, the NSA sort of apologized. NSA Director of Research Michael Wertheimer, writing for the Notices of the American Mathematical Society in “The Mathematics Community and the NSA,” acknowledged that mistakes were made. He wrote in part: “With hindsight, NSA should have ceased supporting the Dual_EC_DRBG algorithm immediately after security researchers discovered the potential for a trapdoor.”

You probably forgot about this flag controversy
China’s big WeChat messaging service apologized in January for bombarding many of its hundreds of millions of users – and not just those in the United States — with Stars and Stripes icons whenever they typed in the words “civil rights” on Martin Luther King, Jr. Day. WeChat also took heat for not offering any sort of special icons when users typed in patriotic Chinese terms. The special flag icons were only supposed to have been seen by US users of the service.

Go Daddy crosses the line
Web site domain provider Go Daddy as usual relied on scantily clad women as well as animals to spread its message during this past winter’s Super Bowl. The surprising thing is that the animals are what got the company in hot water this time. The company previewed an ad that was supposed to parody Budweiser commercials, but its puppy mill punch line didn’t have many people laughing, so the CEO wound up apologizing and pulling the ad.

Name calling at Comcast
Comcast scrambled to make right after somehow changing the name of a customer on his bill to “(expletive… rhymes with North Pole) Brown” from his actual name, Ricardo Brown. The change took place after Brown’s wife called Comcast to discontinue cable service. The service provider told a USA Today columnist that it was investigating the matter, but in the meantime was refunding the Browns for two years of previous service.

Where to start with Google?
Google’s Department of Apologies has been busy this year: In January the company apologized when its translation services spit out anti-gay slurs in response to searches on the terms “gay” and “homosexual.” In May, Google apologized after a Maps user embedded an image of the Android mascot urinating on Apple’s logo. This summer, Google has apologized for its new Photos app mislabeling African Americans as “gorillas” and for Google Niantic Labs’ Ingress augmented reality game including the sites of former Nazi concentration camps as points of interest.

Carnegie Mellon admissions SNAFU
Carnegie Mellon University’s Computer Science School in February apologized after it mistakenly accepted 800 applicants to its grad program, only to send out rejection notices hours later. The irony of a computer glitch leading to this problem at such a renowned computer science school was lost on no one…

Lenovo Superfish debacle
Lenovo officials apologized in February after it was discovered that Superfish adware packaged with some of its consumer notebooks was not only a pain for users but also included a serious security flaw resulting from interception of encrypted traffic. “I have a bunch of very embarrassed engineers on my staff right now,” said Lenovo CTO Peter Hortensius. “They missed this.” Lenovo worked with Microsoft and others to give users tools to rid themselves of Superfish.

Apple apologizes for tuning out customers
Apple apologized in March for an 11-hour iTunes service and App Store outage that it blamed on “an internal DNS error at Apple,” in a statement to CNBC.

Blame the iPads
American Airlines in April apologized after digital map application problems on pilot iPads delayed dozens of flights over a two-day period. The airline did stress that the problem was a third-party app, not the Apple products themselves.

Locker awakened
The creator of a strain of ransomware called Locker apologized after he “woke up” the malware, which encrypted files on infected devices and asked for money to release them. A week after the ransomware was activated, the creator apparently had a change of heart and released the decryption keys victims needed to unlock their systems.

HTC wants to be Hero
Phonemaker HTC’s CEO Cher Wang apologized to investors in June, according to the Taipei Times, after the company’s new One M9 flagship phone failed to boost sales. “HTC’s recent performance has let people down,” said Wang, pointing to better times ahead with the planned fall release of a new phone dubbed Hero.

Ketchup for adults only
Ketchup maker Heinz apologized in June after an outdated contest-related QR code on its bottles sent a German man to an X-rated website. Meanwhile, the website operator offered the man who complained a free year’s worth of access, which he declined.

Livid Reddit users push out interim CEO
Interim Reddit CEO Ellen Pao apologized in July (“we screwed up”) after the online news aggregation site went nuts over the sudden dismissal of an influential employee known for her work on the site’s popular Ask Me Anything section. Pao shortly afterwards resigned from her post following continued demands for her ouster by site users.

Blame the router
United Airlines apologized (“we experienced a network connectivity issue. We are working to resolve and apologize for any inconvenience.”) in July after being forced to ground its flights for two hours one morning due to a technology issue that turned out to be router-related. United has suffered a string of tech glitches since adopting Continental’s passenger management system a few years back following its acquisition of the airline.

Billion dollar apology
Top Toshiba executives resigned in July following revelations that the company had systematically padded its profits by more than $1 billion over a six-year period. “I recognize there has been the most serious damage to our brand image in our 140-year history,” said outgoing President Hisao Tanaka, who is to be succeeded by Chairman Masashi Muromachi. “We take what the committee has pointed out very seriously, and it is I and others in management who bear responsibility.”




Ultimate guide to Raspberry Pi operating systems, part 1

Raspberry Pi
Since we published a roundup of 10 Raspberry Pi operating systems the number of choices has exploded. In this piece I’m including every option I could find (and for you pickers of nits, yes, I’m counting individual Linux distros as individual operating systems, so sue me). If you know of anything I’ve missed or a detail that’s wrong, please drop me a note at and I’ll update the piece and give you a shout out.

Want to know immediately when the next installment of this guide is published? Sign up and you’ll be the first to know.

Now on with the awesomeness …

Adafruit – Occidentalis v0.3
Occidentalis v0.3 is the result of running Adafruit’s Pi Bootstrapper on a Raspbian installation to build a platform for teaching electronics using the Raspberry Pi. Arguably not a true distro (the previous versions were), it’s included because it’s kind of cool.

Arch Linux ARM
Arch Linux ARM is a fork of Arch Linux built for ARM processors. This distro has a long history of being used in a wide range of products, including the Pogoplug as well as the Raspberry Pi. It’s known for being both fast and stable. There is no default desktop but above, I show the option of Openbox.

BerryTerminal has not been updated for several years: “BerryTerminal is a minimal Linux distribution designed to turn the Raspberry Pi mini computer into a low-cost thin client. It allows users to login to a central Edubuntu or other [Linux Terminal Server Project] server, and run applications on the central server.”

DarkELEC: “None of the currently available solutions do a perfect job running XBMC on the Pi, however OpenELEC comes by far the closest, in spite of its locked down nature. [The DarkELEC] fork aims to remedy the very few flaws in its implementation and to focus 100% on the Pi, while also sticking to the upstream and incorporating its updates.”

Debian 8 (“Jessie”)
Debian 8 (“Jessie”) is the latest and greatest version of Debian, and Sjoerd Simons of Collabora appears to be the first person to get it running on the Raspberry Pi 2, back in February this year. As of this writing, there isn’t an “official” release of Debian 8 for the Raspberry Pi, so if you go down this path, expect a few bumps (and complexities) on the way.

DietPi: “At its core, DietPi is the go to image for a minimal Raspbian/Debian Server install. We’ve stripped down and removed everything from the official Raspbian image to give us a bare minimal Raspbian server image that we call DietPi-Core.” DietPi is optimized for all Pi models and has a 120MB compressed image, fits on a 1GB or greater SD card, has only 11 running processes after boot, requires just 16MB of memory after boot, and, “unlike most Raspbian minimal images, ours includes full Wifi support.” An LXDE desktop is optional.

Fedora Remix (Pidora)
Fedora Remix (Pidora): Pidora is a Fedora Remix, a customized version of the Unix-like Fedora system, running on the ARM-based Raspberry Pi single-board computer, and it moves faster than a politician taking a donation. First released in 2003, Fedora has a long history and is noted for its stability. Given the thousands of packages available in the Pidora repository, you’ll be able to find pretty much any functionality or service you need for your project.

GeeXboX ARM is a free and open source media center Linux distribution for embedded devices and desktop computers. GeeXboX is not an application, it’s a full-featured OS that can be booted from a LiveCD, from a USB key, an SD/MMC card or installed on an HDD. The core media delivery application is XBMC Media Center 12.2 “Frodo”.

IPFire is a specialized version of Linux that operates as a firewall. Designed to be highly secure and fast, it’s managed through a Web-based interface.

Kali Linux
Kali Linux is one of my favorite flavors of Linux because of its excellent collection of penetration testing and diagnostic tools (plus it has a great logo). Being able to run this bad boy on a Raspberry Pi means you can have your own custom pen tester in your pocket.

Lessbian 8.1 (“Raptor”)
Lessbian 8.1 (“Raptor”): A stripped-down, bare-minimal Debian “Jessie”. The goal of Lessbian is to “provide a small and fast jessie image for servers and wifi security testing without the madness of system.” This release is described as “a bootable wifi system optimized for throughput, performance, and encryption” and it’s a great platform for running a Tor relay.

Minepeon: There’s gold in them thar’ Bitcoin mines! You can get it out using the Minepeon operating system, based on Linux and running on a Raspberry Pi. Of course you’re going to need a lot of machines to get your digital “quan” given how much more “work” is needed to mine Bitcoin today, but given the price of the Raspberry Pi you won’t go broke assembling a roomful of miners. Show me the digital money!

Moebius: A minimal ARM hard-float distribution that needs just 20MB of RAM for the entire operating system and fits on a 128MB SD card. Version 2 is the current stable version. An LXDE desktop is optional.

nOS: Based on Ubuntu and the KDE, this distro has been abandoned: “Development of nOS has stopped, existing versions will continue to work and receive updates from the package manufacturers until April 2019. The only things that will no longer be issued are updates for nOS specific software and the monthly image releases (they haven’t been going for a while anyway).”

OpenELEC, an acronym for Open Embedded Linux Entertainment Center, is a Linux-based OS that runs the popular XBMC open source digital media center software. The first release of OpenELEC was in 2013 and, according to the OpenELEC Wiki, “Installing OpenELEC for Raspberry Pi from a Linux computer is a very simple process and whether you’re new to Linux or a hardened *NIX user, you shouldn’t have any problems.”

OpenWrt for Raspberry Pi
OpenWrt for Raspberry Pi is “a Linux distribution for embedded devices.” Systems based on OpenWrt are most often used as routers and, with something like 3,500 optional add-on packages, its features can be tailored in pretty much any way imaginable. Want an ultraportable, incredibly tiny wireless router that can be run anywhere? OpenWrt on a Raspberry Pi running off a battery with a USB WiFi dongle can only be described as “epic.”

Raspberry Digital Signage
Raspberry Digital Signage is based on Debian Linux running on a Raspberry Pi and used in Web kiosks and digital signage (including digital photo frames). A really well thought out system, Digital Signage is designed to be easily administered while being as “hacker-proof” as possible.

Raspberry Pi Thin Client
Raspberry Pi Thin Client: Creates a very low-cost thin client that supports Microsoft RDC, Citrix ICA, VMware View, OpenNX and SPICE.

Raspbian Pisces R3
Raspbian Pisces R3: Another non-official distro, Raspbian Pisces, created by Mike Thompson, is an SD image of Raspbian that creates a minimal Debian installation with the LXDE desktop.

Raspbian Server Edition
Raspbian Server Edition: A stripped-down version of Raspbian with some extra packages that boots to a command prompt. It is an excellent tool to use for testing hard float compilations and running benchmarks.

Raspbmc: Yet another distro that is designed for the popular XBMC open source digital media center, Raspbmc is lightweight and robust.

RaspEX (Edition 150706)
RaspEX (Edition 150706): RaspEX is a full Linux desktop system with LXDE and many other useful programs pre-installed. Chromium is used as the Web browser and Synaptic as the package manager. RaspEX uses Ubuntu’s software repositories, so you can install thousands of extra packages if you want.

Raspbian Debian 7.8 (“Wheezy”)
Raspbian Debian 7.8 (“Wheezy”): The Raspbian Debian “Wheezy” distro for the Raspberry Pi is a fully functional Debian Wheezy installation containing the LXDE desktop, the Epiphany browser, Wolfram Mathematica, and Scratch. It supports the Raspberry Pi and the Raspberry Pi 2 and is the current Debian version supported by the Raspberry Pi Foundation.

Red Sleeve Linux
Red Sleeve Linux: “RedSleeve Linux is a 3rd party ARM port of a Linux distribution of a Prominent North American Enterprise Linux Vendor (PNAELV). They object to being referred to by name in the context of clones and ports of their distribution, but if you are aware of CentOS and Scientific Linux, you can probably guess what RedSleeve is based on. RedSleeve is different from CentOS and Scientific Linux in that it isn’t a mere clone of the upstream distribution it is based on – it is a port to a new platform, since the upstream distribution does not include a version for ARM.”

RISC OS Pi: Originally developed and released in 1987 by UK-based Acorn Computers Ltd., RISC OS is, as the RISC OS Web site claims, “its own thing – a very specialized ARM-based operating system… if you’ve not used it before, you will find it doesn’t behave quite the same way as anything else.” RISC OS Pi has been available on the Raspberry Pi since 2012.

SliTaz GNU/Linux Raspberry Pi
The SliTaz GNU/Linux Raspberry Pi distribution is “a small operating system for a small computer! The goal is to provide a fast, minimal footprint and optimized distro for the Raspberry Pi. You can setup a wide range of system types, from servers to desktops and learning platforms.”

Windows 10 IoT Core Edition
Windows 10 IoT Core Edition’s GUI stack is limited to Microsoft’s Universal App Platform, so there’s no Windows desktop or even a command prompt. With PowerShell remoting you get a PowerShell terminal from which you can run Windows commands and see the output of native Win32 apps. Currently available as a preview, it has no support for Wi-Fi or Bluetooth.

In our next installment of Network World’s Ultimate Guide to Raspberry Pi Operating Systems we’ll be covering a whole new collection: Bodhi, Commodore Pi, FreeBSD, Gentoo, ha-pi, I2Pberry, Kano OS, MINIBIAN, motionPie, Nard, NetBSD, OSMC, PiBang Linux, PiBox, PiMAME, PiParted, Plan 9, PwnPi, RasPlex, Slackware ARM, SlaXBMCRPi, slrpi, Tiny Core Linux, Ubuntu, Volumio, XBian, and more.

Want to know immediately when the next installment is published? Sign up and you’ll be the first to know.
Want more Pi? Check out 10 Reasons why the Raspberry Pi 2 Model B is a killer product and MIPS Creator CI20: Sort of a challenge to the Raspberry Pi 2 Model B. What could be the next RPi? Check out Endless: A computer the rest of the world can afford and How low can we go? Introducing the $9 Linux computer!


Why you need to care more about DNS

There’s one key part of your network infrastructure that you’re probably not monitoring, even though it keeps you connected, can tell you a lot about what’s happening inside your business – and is an increasing source of attacks. DNS isn’t just for domain names any more.

When you say Domain Name System (DNS), you might think, naturally enough, of domain names and the technical details of running your Internet connection. You might be concerned about denial of service attacks on your website, or someone hijacking and defacing it.

While those certainly matter, DNS isn’t just for looking up Web URLs any more; it’s used by software to check licences, by video services to get around firewalls and, all too often, by hackers stealing data out from your business. Plus, your employees may be gaily adding free DNS services to their devices that, at the very least, mean you’re not in full control of your network configuration. It’s a fundamental part of your infrastructure that’s key to business productivity, as well as a major avenue of attack, and you probably have very little idea of what’s going on.

DNS is the most ubiquitous protocol on the Internet, but it’s also probably the most ignored. Data Leak Protection (DLP) systems that check protocols used by email, web browsers, peer-to-peer software and even Tor, often neglect DNS. “Nobody looks much at DNS packets, even though DNS underlies everything,” says Cloudmark CTO Neil Cook. “There’s a lot of DLP done on web and email but DNS is sitting there, wide open.”

Data lost in the Sally Beauty breach last year was exfiltrated in packets disguised as DNS queries, but Cook points out some unexpected though legitimate uses; “Sophos uses DNS tunnelling to get signatures; we even use it for licensing.”

A number of vendors are starting to offer DNS tools, from Infoblox’s appliances to OpenDNS’ secure DNS service. Palo Alto Networks is starting to offer DNS inspection services; U.K. domain registry Nominet has just launched its Turing DNS visualisation tool to help businesses spot anomalies in their DNS traffic; and Cloudmark analyzes patterns of DNS behavior to help detect links in email going to sites that host malware. There are also any number of plugins for common monitoring tools that will give you basic visibility of what’s going on.

Few businesses do any monitoring of their DNS traffic despite it being the source of many attacks. It’s not just the malware that runs on Point of Sale systems, capturing customer credit cards in attacks like those on Sally Beauty, Home Depot and Target, that uses DNS tunnelling. DNS is the most ubiquitous command and control channel for malware, as well as being used to get data stolen by malware from your business.

“DNS is frequently used as a conduit to surreptitiously tunnel data in and out of the company,” says Cricket Liu, the chief DNS architect at Infoblox, “and the reason people who write malware are using DNS to tunnel out this traffic is because it’s so poorly monitored, most people have no idea what kind of queries are going over their DNS infrastructure.”
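
Tunnelled payloads typically ride in long, near-random subdomain labels of the query name, so even a crude heuristic over resolver logs can surface candidates, which makes Liu’s point actionable without any new appliances. A sketch of that idea (the length and entropy thresholds are illustrative assumptions, not tuned values):

```python
# Heuristic sketch for spotting tunnel-like DNS queries: encoded payloads
# ride in long subdomain labels with near-random characters, which show
# up as high Shannon entropy. Thresholds are illustrative, not tuned.
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character; base32/base64-style payloads score well above English."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(qname, max_label=40, entropy_threshold=3.5):
    """Flag query names whose longest subdomain label is long and high-entropy."""
    labels = qname.rstrip(".").split(".")
    # Ignore the registered domain and TLD; any payload rides in front of them.
    subdomain = max(labels[:-2], key=len, default="")
    return len(subdomain) > max_label and shannon_entropy(subdomain) > entropy_threshold
```

Run over a day of resolver logs this will throw false positives (CDNs, and the legitimate signature and licensing lookups Cook describes, also use long encoded names), so treat hits as leads to investigate rather than verdicts.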

There’s also the problem of people using DNS to bypass network security controls; that might be employees avoiding network restrictions, security policies or content filtering, or it might be attackers avoiding detection.

DNS attacks are a widespread problem
In a recent Vanson Bourne study of U.S. and U.K. businesses, 75 percent said they’d suffered a DNS attack (including denial of service and DNS hijacking as well as data theft through DNS), with 49 percent having experienced an attack during 2014. Worryingly, 44 percent said it was hard to justify investments in DNS security because senior management didn’t recognize the issue.

That’s because they think of DNS as a utility, suggests Nominet CTO Simon McCalla. “For most CIOs, DNS is something that happens in the background and isn’t a high priority for them. As long as it works, they’re happy. However, what most of them don’t realize is that there is a wealth of information inside their DNS that tells them what is going on within their network internally.”

Liu is blunter: “I’m surprised how few organizations bother to do any kind of monitoring of their DNS infrastructure. DNS doesn’t get any respect, yet TCP/IP networks don’t work without DNS; it’s the unrecognized linchpin.” Liu insists “it’s not rocket science to put in monitoring of your DNS infrastructure; there are lots of mechanisms out there for understanding what queries DNS servers are handling and their responses. And you really ought to be doing so, because this infrastructure is no less critical than the routing and switching infrastructure that actually moves packets across your network.”

Usually, he finds demonstrating the threat is enough to get management attention. “Most CIOs – once they see how with one compromised machine on the inside of a network you can set up a bi-directional channel between that endpoint and a server on the internet – realize they need to do something about this. It’s just a matter of being faced with that cold hard reality.”

Tackling DNS security

First, you need to stop thinking about DNS as being about networking and just “part of the plumbing,” says David Ulevitch, the CEO of OpenDNS (which Cisco is in the process of acquiring).

“It used to be network operators who ran your DNS, and they were looking at it in terms of making sure the firewall was open, and not blocking what they viewed as a critical element of connectivity as opposed to a key component of security policy, access control and auditing. But we live in a world today where every network operator has to be a security practitioner.”

If you actively manage your DNS, you can apply network controls at a level employees (and attackers) can’t work around. You can detect phishing attacks and malware command and control more efficiently at the DNS layer than using a web proxy or doing deep packet inspection, and you can detect it as it happens rather than days later.

“DNS is a very good early warning system,” says Liu. “You can pretty much at this point assume you have infected devices on your network. DNS is a good place to set up little tripwires, so when malware and other malicious software gets on your network, you can easily detect its presence and its activity, and you can do some things to minimize the damage it does.” You could even see how widespread the infection is, by looking for similar patterns of behaviour.

Services like OpenDNS and Infoblox can also look across more than your network. “It’s easy to build a baseline of what normal looks like and do anomaly detection”, says Ulevitch. “Suppose you’re an oil and gas business in Texas and a new domain name pops up in China pointing to an IP address in Europe, and no other oil company is looking at this domain. Why should you be the guinea pig?”
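A toy version of that baseline idea takes only a few lines. This sketch (domain names invented) simply diffs today’s queries against a set of previously seen domains; a production baseline would be distilled from weeks of resolver logs and would score anomalies rather than flag every newcomer.

```python
def new_domains(todays_queries, baseline):
    """Domains queried today that the baseline has never seen."""
    return sorted(set(todays_queries) - baseline)

# Invented data; a real baseline would come from weeks of resolver logs.
baseline = {"example.com", "salesforce.com", "windowsupdate.com"}
today = ["example.com", "salesforce.com", "example.com",
         "a1b2c3-newly-registered.example.cn"]
alerts = new_domains(today, baseline)
```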

You also need to monitor how common addresses are resolved on your network – hackers can try to send links to sites like Paypal to their own malicious sites – and where your external domain points to. When Tesla’s website was recently redirected to a spoof page put up by hackers, who also took control of the company’s Twitter account (and used it to flood a small computer repair store in Illinois with calls from people they’d fooled into believing they’d won free cars), the attackers also changed the name servers used to resolve the domain name. Monitoring their DNS might have given Tesla a heads-up that something was wrong before users started tweeting pictures of the hacked site.
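The Tesla redirection is exactly the kind of change a trivial scheduled check can catch. The sketch below compares the name servers currently published for a domain against the set you expect; all the names here are placeholders, and in practice the observed list would come from something like `dig +short NS yourdomain.com` or a DNS library, run on a schedule.

```python
# Placeholder name servers; substitute the NS records your registrar should
# be publishing for your own domain.
EXPECTED_NS = {"ns1.registrar.example", "ns2.registrar.example"}

def ns_delta(observed_ns):
    """Anything added to, or missing from, the expected NS set is a red flag."""
    observed = set(observed_ns)
    return observed - EXPECTED_NS, EXPECTED_NS - observed

unexpected, missing = ns_delta(["ns1.attacker.example", "ns2.registrar.example"])
```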

At the very least, remember that DNS underpins all your online services, Ulevitch points out. “The bar is very low for improving DNS. Usually, DNS is seen as a cost center; people don’t invest in reliable enough infrastructure or high enough performance equipment, so it’s hard to cope with a high volume of transactions.”

That doesn’t only matter if you’re targeted by a DNS attack. “Organizations should look at DNS performance because it will have a material impact on everything you do online. Every time you send an email or open an app you’re doing DNS requests. These days, web pages are very complex and it’s not uncommon to have more than 10 DNS requests to load a page. That can be a whole extra second or more, just to handle the DNS components of loading a page.”
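That per-page DNS cost is easy to measure for yourself. This sketch times resolver calls the way a page referencing ten hostnames would trigger them; it uses localhost so it runs anywhere, and it deliberately ignores caching and browsers’ parallel lookups, so treat it as a rough probe rather than a benchmark. Substitute real hostnames to profile your own resolver.

```python
import socket
import time

def dns_seconds(hostname: str) -> float:
    """Wall-clock time spent on one resolver call."""
    start = time.perf_counter()
    try:
        socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        pass  # a failed lookup still costs the caller time
    return time.perf_counter() - start

# A page pulling resources from ten hostnames pays roughly this much
# DNS cost up front (uncached, sequential).
total = sum(dns_seconds(h) for h in ["localhost"] * 10)
```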
Tracking business behavior

Monitoring DNS can also give you a lot of information about what’s going on across your business far beyond the network. “We live in a world where the network perimeter is becoming ephemeral and where services are easy to adopt,” Ulevitch points out. “A marketing executive can sign up to Salesforce; if you’re looking at the DNS you can see that. You can see how many employees are using Facebook. You can see devices showing up in your network, whether it’s because they’re checking a licence or doing data exfiltration. If you have a hundred offices, you can still see who is connecting devices.”

That’s not just PCs either, he points out; printers and televisions and IoT devices are increasingly connecting to your business network. “Do I want my TVs phoning home? If you look at the Samsung privacy policy, it says the TV has a microphone that might be listening at any time; do I really want that in the corporate boardroom? Maybe I want to apply DNS policies so my TVs can’t phone home.”

Infoblox’s Liu agrees. “IoT devices are often not designed with a lot of security in mind. You want to make sure devices are connecting where they should be and that if someone throws something else onto your IoT network they can’t access your internal network. DNS is a useful place to monitor and control that access.”

And because you’re already using DNS, monitoring it isn’t disruptive, Ulevitch points out. “Usually in security, the reason most things aren’t used is the effort needed to make sure they don’t have a detrimental effect on user performance.”

In fact, you need a good reason not to be doing this, he says. “There are fundamental best practices in security and one of them is network visibility. Not being able to see the traffic on your network means you’re flying blind. Finding a way to inspect DNS traffic is a fundamental requirement of a strong security posture. To not know what’s happening on your network is borderline derelict.”

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at

Biggest tech industry layoffs of 2015, so far

Microsoft, BlackBerry, NetApp among those trimming workforces

While the United States unemployment rate has hit a post-recession low, the network and computing industry has not been without significant layoffs so far in 2015.

Some companies’ workforce reductions are tricky to calculate, as layoff plans announced by the likes of HP in recent years have been spread across multiple years. But here’s a rundown of this year’s big layoffs either formally announced or widely reported on.

*Good Technology:
The secure mobile technology company, in prepping to go public in the near future, laid off more than 100 people late last year or early this year, according to reports in January by TechCrunch and others. Privately held Good, which employs more than 1,100 people according to its listing on LinkedIn, doesn’t comment on such actions, though the company did say in an amended IPO filing in March that it would need to slash jobs this fiscal year if certain funding doesn’t come through. Good also showed improved financials, in terms of growing revenue and reduced losses, in that filing. Meanwhile, the company continues its business momentum with deals such as an extended global reseller agreement announced with Samsung Electronics America in June.

*Sony:
Reuters and others reported in January that Sony would be cutting around 1,000 jobs as a result of its smartphone division’s struggles. The Wall Street Journal in March wrote that Sony was clipping 2,000 of its 7,000 mobile unit workers as it attempts to eke out a profit and refocus, possibly on software, to fare better vs. Apple and other market leaders. Sony’s mobile business, despite solid reviews for its Xperia line of handsets, is nearly nonexistent in big markets such as the United States and China, according to the WSJ report. Still, the company’s president says Sony will never exit the market.

*Citrix:
The company’s 900 job cuts, announced in January along with a restructuring and improved revenue, were described by one analyst as “defensive layoffs” made in view of some disconcerting macroeconomic indicators, such as lower oil prices and a strengthening dollar. The virtualization company said its restructuring, including layoffs of 700 full-time employees and 200 contractors, would save it $90 million to $100 million per year as it battles VMware, Microsoft and others in the virtualization and cloud markets.

*NetApp:
The company announced in May, while revealing disappointing financial results, that it would be laying off 500 people, or about 4% of its workforce. It’s the third straight year that the storage company has had workforce reductions, and industry watchers are increasingly down on NetApp. The company has been expanding its cloud offerings but has also been challenged by customers’ moves to the cloud and the emergence of new hyperconvergence players attacking its turf.

*Microsoft:
In scaling down its mobile phone activities, Microsoft is writing off the whole value of the former Nokia smartphone business it bought last year and laying off up to 7,800 people from that unit. Microsoft also announced 18,000 job cuts last year, including many from the Nokia buyout. Despite an apparent departure from the phone business, CEO Satya Nadella said Microsoft remains committed to Windows Phone products and working with partners.

*BlackBerry:
The beleaguered smartphone maker acknowledged in May it was cutting an unspecified number of staff in its devices unit in an effort to return to profitability and focus on new areas, such as the Internet of Things (it did eke out a quarterly profit earlier this year, though is still on pace to register a loss for the year). The Waterloo, Ontario outfit said in a statement that it had decided to unite its device software, hardware and applications business, “impacting a number of employees around the world.” Then in July BlackBerry again said it was making job cuts, and again didn’t specify the number.

*Qualcomm:
The wireless chipmaker is the latest whose name is attached to layoff speculation, and official word on this could come as soon as this week, given the company is announcing its quarterly results. The San Diego Union-Tribune reports that “deep cost cuts” could be in the offing, including thousands of layoffs, possibly equaling 10% of the staff. The company wasn’t commenting ahead of its earnings conference call on July 22. Qualcomm has been a high flyer in recent years as a result of the smartphone boom, but regulatory issues in China, market share gains by Apple and being snubbed by Samsung in its latest flagship phone have all hurt Qualcomm of late, the Union-Tribune reports.

*Lexmark: The printer and printer services company this month announced plans for 500 layoffs as part of a restructuring related to a couple of recent acquisitions. The $3.7 billion Kentucky-based company employs more than 12,000 people worldwide.


The top 10 supercomputers in the world, 20 years ago

In 1995, the top-grossing film in the U.S. was Batman Forever. (Val Kilmer as Batman, Jim Carrey as the Riddler, Tommy Lee Jones as Two-Face. Yeah.) The L.A. Rams were moving back to St. Louis, and Michael Jordan was moving back to the Bulls. Violence was rife in the Balkans. The O.J. trial happened.

It was a very different time, to be sure. But all that was nothing compared to how different the world of supercomputing was.

The Top500 list from June 1995 shows just how far the possibilities of silicon have come in the past 20 years. Performance figures are listed in gigaflops, rather than the teraflops of today, meaning that, for example, the 10th-place entrant in this week’s newly released list is more than 84,513 times faster than its two-decades-ago equivalent.

#10: 1995 – Cray T3D-MC512-8, Pittsburgh Supercomputing Center, 50.8 GFLOP/S
The Pittsburgh Supercomputing Center is still an active facility, though none of its three named systems – Sherlock, Blacklight and Anton – appear on the latest Top500 list. The last time it was there was 2006, with a machine dubbed Big Ben placing 256th. (The PSC’s AlphaServer SC45 took second place in 2001 with a speed of 7,266 gigaflops.)

#9: 1995 – Cray T3D-MC512-8, Los Alamos National Laboratory, 50.8 GFLOP/S
Yes, it’s the same machine twice, which demonstrates that supercomputers were less likely to be bespoke systems filling giant rooms of their own, and more likely to be something you just bought from Cray or Intel. JUQUEEN, its ninth-place counterpart on the 2015 list, is more than 98,600 times as powerful as the old T3D-MC512-8, a 512-core device that appears to have been more or less contained to a pair of big cabinets.

#8: 1995 – Thinking Machines CM-5/896, Minnesota Supercomputer Center, 52.3 GFLOP/S
Thinking Machines was an early supercomputer manufacturer, based in the Boston area, that had actually gone bankrupt already by the time the June 1995 Top500 list was published – Sun Microsystems would eventually acquire most of its assets in a 1996 buyout deal. The University of Minnesota’s HPC department is now the Minnesota Supercomputing Institute, whose new Mesabi system placed 141st on the latest list at 4.74 teraflops.

#7: 1995 – Fujitsu VPP500/42, Japan Atomic Energy Research Institute, 54.5 GFLOP/S
Fujitsu’s been a fixture on the Top500 since the list was first published in 1993, and 1995 was no exception, with the company picking up three of the top 10 spots. The Japan Atomic Energy Research Institute has dropped off the list since 2008, though it may be set to return soon, with the recent announcement that it had agreed to purchase a Silicon Graphics ICE X system with a theoretical top speed of 2.4 petaflops – which would place it just outside the top 25 on the latest list.

#6: 1995 – Thinking Machines CM-5/1056, Los Alamos National Laboratory, 59.7 GFLOP/S
For the record, we’re well over a 100,000x performance disparity between the 1995 and 2015 systems at this rank. One thing that’s notable about 1995’s systems compared to today’s is the small number of cores – the CM-5 that placed sixth in 1995 used 1,056 cores, and the Fujitsu behind it used only 42. Per-core performance is still orders of magnitude higher today, but it’s worth noting that a huge proportion of the total performance increase is due to the vastly higher number of processor cores in use – no system on the 2015 list had fewer than 189,792, counting accelerators.

#5: 1995 – Fujitsu VPP500/80, Japan National Laboratory for High Energy Physics, 98.9 GFLOP/S
The performance multiple is back down to about 87,000 with the substantial jump up to the 80-core Fujitsu’s nearly 100 gigaflop mark. The VPP500/80 would remain on the list through 1999, never dropping below the 90th position.

#4: 1995 – Cray T3D MC1024-8, undisclosed U.S. government facility, 100.5 GFLOP/S
The T3D MC1024-8 system used at an undisclosed government facility (which is almost certainly not the NSA, of course) was the first on the 1995 list to top the 100 gigaflop mark, and stayed on the Top500 until 2001. That’s a solid run, and one that the Fujitsu K computer, on its fourth year in the top 5, would do well to emulate.

#3: 1995 – Intel XP/S-MP 150, Oak Ridge National Laboratory, 127.1 GFLOP/S
The Department of Energy’s strong presence on the upper rungs of the Top500 list is one thing that hasn’t changed in 20 years, it seems – four of the top 10 in both 2015 and 1995 were administered by the DOE. The XP/S-MP 150 system boasted roughly three times as many processor cores as all but one other entry on the list, at 3,072, in a sign of things to come.

#2: 1995 – Intel XP/S140, Sandia National Laboratory, 143.4 GFLOP/S
Indeed, the other Intel system on the 1995 list was the only other one with more cores, at 3,608. It’s even starting to look more like a modern supercomputer.

#1: 1995 – Fujitsu Numerical Wind Tunnel, National Aerospace Laboratory of Japan, 170 GFLOP/S
The Numerical Wind Tunnel, as the name suggests, was used for fluid dynamics simulations in aerospace research, most notably the classic wind tunnel testing to measure stability and various forces acting on an airframe at speed. The 2015 winner, China’s Tianhe-2, is almost two hundred thousand times as powerful, however.


Why the open source business model is a failure

Most open source companies can’t thrive by selling maintenance and support subscriptions. But the cloud may be the key to revenue generation.

Open source software companies must move to the cloud and add proprietary code to their products to succeed. The current business model is a recipe for failure.

That’s the conclusion of Peter Levine, a partner at Andreessen Horowitz, the Silicon Valley venture capital firm that backed Facebook, Skype, Twitter and Box as startups. Levine is also former CEO of XenSource, a company that commercialized products based on the open source Xen hypervisor.

Levine says the conventional open source business model is flawed: Open source companies that charge for maintenance, support, warranties and indemnities for an application or operating system that is available for free simply can’t generate enough revenue.

“That means open source companies have a problem investing in innovation, making them dependent on the open source community to come up with innovations,” he says.

Why is that a problem? After all, the community-based open source development model has proved itself to be more than capable of coming up with innovative and very useful pieces of software.
Revenue limits

The answer is that without adequate funding, open source businesses can’t differentiate their products significantly from the open source code their products are based on, Levine maintains. Because of that there’s less incentive for potential customers to pay for their products rather than continue using the underlying code for nothing. At the very least it limits the amount that open source businesses can hope to charge – putting a cap on their potential revenues. It’s a vicious circle.

“If we look at Red Hat’s market, 50 percent of potential customers may use Fedora (the free Linux distribution), and 50 percent use Red Hat Enterprise Linux (the version that is supported and maintained by Red Hat on a subscription basis). So a large part of the potential market is carved off – why should people pay the ‘Red Hat tax’?” Levine asks.

You could argue that this is actually good for businesses, because the availability of open source software at no cost provides competition to open source companies’ offerings based on the same code, ensuring that these offerings are available at a very reasonable price.

But if open source businesses can’t monetize their products effectively enough to invest in innovation, then potential corporate clients can’t benefit from the fruits of that innovation, and that’s not so good for customers.
Uneven playing field

The problem is compounded when you consider that open source companies’ products are not just competing with the freely available software on which their products are built. It’s often the case that they also have to compete with similar products sold by proprietary software companies. And that particular playing field is often an uneven one, because the low revenues that open source companies can generate from subscriptions mean that they can’t match the huge sales and marketing budgets of competitors with proprietary product offerings.

It’s an important point because although sales and marketing activities are costly, they’re also effective. If they weren’t, companies wouldn’t waste money on them.

So it follows that open source companies miss out on sales even when they have a superior offering, because having the best product isn’t enough. It’s also necessary to convince customers to buy it, through clever marketing and persuasive sales efforts.

The problem, summed up by Tony Wasserman, a professor of software management practice at Carnegie Mellon University, is that when you’re looking to acquire new software, “open source companies won’t take you out to play golf.”

The result, says Levine, is that open source companies simply can’t compete with proprietary vendors on equal terms. “If you look at Red Hat, MySQL, KVM … in every case where there’s a proprietary vendor competing, they have more business traction and much more revenue than their open source counterparts.”

As an illustration of the scale of the problem, Red Hat is generally held up as the poster child of open source companies. It offers an operating system and a server virtualization system, yet its total revenues are about a third of specialist virtualization vendor VMware, and about 1/40th of Microsoft’s.
Hybrid future

This is why Levine has concluded that the way for open source companies to make money out of open source software is to abandon the standard open source business model of selling support and maintenance subscriptions, and instead to use open source software as a platform on which to build software as a service (SaaS) offerings.

“I can run a SaaS product by using Fedora as a base, but then building proprietary stuff on top and selling the service. So the monetization goes to the SaaS product, not to an open source product,” says Levine. “I think we’ll start to see an increasing number of SaaS offerings that are a hybrid of open source and proprietary software.”


He adds that many SaaS companies – including Salesforce, Digital Ocean and Github (two companies Andreessen Horowitz has invested in) – already use a mix of open source and proprietary software to build their services.

And Levine says that Facebook is the biggest open source software company of them all. “I was shocked when I realized this, and Google probably is the second biggest,” he says.

Facebook has developed and uses open source software for the infrastructure on which its social network is built, and adds its own proprietary software on top to produce a service it can monetize. Google also generates a large volume of open source infrastructure code, although its search and advertising software is proprietary, he adds.

While the existence of free-to-download software undoubtedly makes it harder for open source businesses to monetize the same software by adding support, maintenance and so on, it’s also the case that these low-cost alternatives must make life more difficult than otherwise for proprietary vendors trying to sell their products into the same market.

That’s because these low-cost alternatives necessarily make the market for proprietary software smaller even if proprietary companies have higher revenues that they can use to innovate, differentiate their products, and market them.

This could help explain why some proprietary software companies are moving their products to the cloud, or at least creating SaaS alternatives. A mature product like Microsoft’s Office suite can largely be functionally replicated by an open source alternative like LibreOffice, but Microsoft’s cloud-based Office 365 product takes the base Office functionality and adds extra services such as file storage, Active Directory integration and mobile apps on top.

That’s much harder for anyone to replicate, open source or not. And it suggests that in the future it will be all software companies, not just open source shops that move to the cloud to offer their software as a service.

Attackers abuse legacy routing protocol to amplify distributed denial-of-service attacks

Servers could be haunted by a ghost from the 1980s, as hackers have started abusing an obsolete routing protocol to launch distributed denial-of-service attacks.

DDoS attacks observed in May by the research team at Akamai abused home and small business (SOHO) routers that still support Routing Information Protocol version 1 (RIPv1). This protocol is designed to allow routers on small networks to exchange information about routes.

RIPv1 was first introduced in 1988 and was retired as an Internet standard in 1996 due to multiple deficiencies, including lack of authentication. These were addressed in RIP version 2, which is still in use today.

In the DDoS attacks seen by Akamai, which peaked at 12.8 gigabits per second, the attackers used about 500 SOHO routers that are still configured for RIPv1 in order to reflect and amplify their malicious traffic.

DDoS reflection is a technique that can be used to hide the real source of the attack, while amplification allows the attackers to increase the amount of traffic they can generate.

RIP allows a router to ask other routers for information stored in their routing tables. The problem is that the source IP (Internet Protocol) address of such a request can be spoofed, so the responding routers can be tricked to send their information to an IP address chosen by attackers—like the IP address of an intended victim.

This is a reflection attack because the victim will receive unsolicited traffic from abused routers, not directly from systems controlled by the attackers.

But there’s another important aspect to this technique: A typical RIPv1 request is 24 bytes in size, but if the responses generated by abused routers are larger than that, attackers can generate more traffic than they otherwise could with the bandwidth at their disposal.

In the attacks observed by Akamai, the abused routers responded with multiple 504-byte payloads—in some cases 10—for every 24-byte query, achieving a 13,000 percent amplification.
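The arithmetic behind those figures is straightforward: amplification is just response bytes expressed as a percentage of request bytes. With the packet sizes Akamai reports, one 504-byte payload per 24-byte query already yields 2,100 percent, ten payloads yield 21,000 percent, and the 13,000 percent observed works out to roughly six payloads per query on average:

```python
REQUEST_BYTES = 24    # typical RIPv1 request
PAYLOAD_BYTES = 504   # each response payload seen in the attacks

def amplification_pct(payloads: int) -> float:
    """Response size as a percentage of the 24-byte request."""
    return payloads * PAYLOAD_BYTES / REQUEST_BYTES * 100

single = amplification_pct(1)    # one payload per query
worst = amplification_pct(10)    # ten payloads per query
```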

Other protocols can also be exploited for DDoS reflection and amplification if servers are not configured correctly, including DNS (Domain Name System), mDNS (multicast DNS), NTP (Network Time Protocol) and SNMP (Simple Network Management Protocol).

The Akamai team scanned the Internet and found 53,693 devices that could be used for DDoS reflection using the RIPv1 protocol. Most of them were home and small business routers.

The researchers were able to determine the device make and model for more than 20,000 of them, because they also had their Web-based management interfaces exposed to the Internet.

Around 19,000 were Netopia 3000 and 2000 series DSL routers distributed by ISPs, primarily from the U.S., to their customers. AT&T had the largest concentration of these devices on its network—around 10,000—followed by BellSouth and MegaPath, each with 4,000.

More than 4,000 of the RIPv1 devices found by Akamai were ZTE ZXV10 ADSL modems and a few hundred were TP-Link TD-8xxx series routers.

While all of these devices can be used for DDoS reflection, not all of them are suitable for amplification. Many respond to RIPv1 queries with a single route, but the researchers identified 24,212 devices that offered at least an 83 percent amplification rate.

To avoid falling victim to RIPv1-based attacks, server owners should use access control lists to restrict Internet traffic on UDP source port 520, the Akamai researchers said in their report. Meanwhile, the owners of RIPv1-enabled devices should switch to RIPv2, restrict the protocol’s use to the internal network only or, if neither of those options is viable, use access control lists to restrict RIPv1 traffic only to neighboring routers.

7 command line tools for monitoring your Linux system

Here is a selection of basic command line tools that will make your exploration and optimization in Linux easier.

Dive on in
One of the great things about Linux is how deeply you can dive into the system to explore how it works and to look for opportunities to fine tune performance or diagnose problems. Here is a selection of basic command line tools that will make your exploration and optimization easier. Most of these commands are already built into your Linux system, but in case they aren’t, just Google “install”, the command name, and the name of your distro and you’ll find which package needs installing (note that some commands are bundled with other commands in a package that has a different name from the one you’re looking for). If you have any other tools you use, let me know for our next Linux Tools roundup.

How we did it
FYI: The screenshots in this collection were created on Debian Linux 8.1 (“Jessie”) running in a virtual machine under Oracle VirtualBox 4.3.28 under OS X 10.10.3 (“Yosemite”). See my next slideshow “How to install Debian Linux in a VirtualBox VM” for a tutorial on how to build your own Debian VM.

Top command
One of the simpler Linux system monitoring tools, the top command comes with pretty much every flavor of Linux. This is the default display, but pressing the “z” key switches the display to color. Other hot keys and command line switches control things such as the display of summary and memory information (the second through fourth lines), sorting the list according to various criteria, killing tasks, and so on (you can find the complete list here).

Htop is a more sophisticated alternative to top. Wikipedia: “Users often deploy htop in cases where Unix top does not provide enough information about the system’s processes, for example when trying to find minor memory leaks in applications. Htop is also popularly used interactively as a system monitor. Compared to top, it provides a more convenient, cursor-controlled interface for sending signals to processes.” (For more detail go here.)

Vmstat is a simpler tool for monitoring Linux system performance statistics, but that simplicity makes it highly suitable for use in shell scripts. Fire up your regex-fu and you can do some amazing things with vmstat and cron jobs. “The first report produced gives averages since the last reboot. Additional reports give information on a sampling period of length delay. The process and memory reports are instantaneous in either case” (go here for more info).
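As a sketch of that scripting use case: a cron-driven monitor would capture vmstat’s report and pull out the columns it cares about. A canned sample (values invented) stands in for a live capture here, and locating columns by header name avoids hard-coding field positions.

```python
# Canned vmstat output; a real script would capture it live, e.g. with
# subprocess.run(["vmstat"], capture_output=True, text=True).
SAMPLE = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 811548  31436 214684    0    0    43     9   28   66  1  0 99  0  0
"""

def vmstat_column(output: str, name: str) -> int:
    """Pull one column from vmstat's report, located by header name."""
    lines = output.splitlines()
    headers = lines[1].split()   # second line holds the column names
    values = lines[2].split()    # third line holds the figures
    return int(values[headers.index(name)])

idle = vmstat_column(SAMPLE, "id")       # CPU idle percentage
free_kb = vmstat_column(SAMPLE, "free")  # free memory, in kB
```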

The ps command shows a list of running processes. In this case, I’ve used the “-e” switch to show everything, that is, all processes running (I’ve scrolled back to the top of the output; otherwise the column names wouldn’t be visible). This command has a lot of switches that allow you to format the output as needed. Add a little of the aforementioned regex-fu and you’ve got a powerful tool. Go here for the full details.
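A small taste of that regex-fu, run against canned ps -e-style output (the processes listed are invented; a live script would capture the text with subprocess rather than hard-coding it):

```python
import re

PS_OUTPUT = """\
  PID TTY          TIME CMD
    1 ?        00:00:01 systemd
  842 ?        00:12:40 mysqld
 1217 ?        00:00:03 sshd
 4310 pts/0    00:00:00 bash
"""

def find_procs(ps_output: str, pattern: str):
    """Return (pid, command) pairs whose command matches the regex."""
    hits = []
    for line in ps_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        pid, cmd = int(fields[0]), fields[-1]
        if re.search(pattern, cmd):
            hits.append((pid, cmd))
    return hits

daemons = find_procs(PS_OUTPUT, r"d$")   # classic daemons end in 'd'
```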

Pstree “shows running processes as a tree. The tree is rooted at either pid or init if pid is omitted. If a user name is specified, all process trees rooted at processes owned by that user are shown.” This is a really useful tool, as the tree helps you sort out which process is dependent on which process (go here).

Understanding just how an app uses memory is often crucial in debugging, and the pmap command produces just such information when given a process ID (PID). The screenshot shows the medium weight output generated by using the “-x” switch. You can get pmap to produce even more detailed information using the “-X” switch, but you’ll need a much wider terminal window.
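To show what that output is good for: this sketch totals the RSS column (resident memory, in kB) from a canned fragment of pmap -x-style output. The addresses and mappings are invented, and a live script would capture the text from a real PID instead.

```python
# A canned fragment of `pmap -x` output (addresses and mappings invented).
PMAP_X = """\
1234:   /usr/bin/example
Address           Kbytes     RSS   Dirty Mode  Mapping
0000556a1c2f4000     132     120       0 r-x-- example
00007f3b2a000000    1948     640       0 r-x-- libc.so.6
"""

def rss_total_kb(pmap_output: str) -> int:
    """Sum the RSS column across all mappings."""
    total = 0
    for line in pmap_output.splitlines()[2:]:   # skip PID and header lines
        fields = line.split()
        if len(fields) >= 3 and fields[1].isdigit():
            total += int(fields[2])             # RSS is the third column
    return total

resident = rss_total_kb(PMAP_X)   # 120 + 640 kB resident
```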

A crucial factor in your Linux system’s performance is processor and storage usage, which are what the iostat command reports on. As with the ps command, iostat has loads of switches that allow you to select the output format you need as well as sample performance over a time period and then repeat that sampling a number of times before reporting. See here.
