Archive for the ‘Tech’ Category

Biggest tech industry layoffs of 2015, so far

Microsoft, BlackBerry, NetApp among those trimming workforces

While the United States unemployment rate has hit a post-recession low, the network and computing industry has not been without significant layoffs so far in 2015.

Some companies’ workforce reductions are tricky to calculate, as layoff plans announced by the likes of HP in recent years have been spread across multiple years. But here’s a rundown of this year’s big layoffs either formally announced or widely reported on.

*Good Technology:
The secure mobile technology company, which is prepping to go public in the near future, laid off more than 100 people late last year or early this year, according to January reports by TechCrunch and others. Privately held Good, which employs more than 1,100 people according to its LinkedIn listing, doesn’t comment on such actions, though the company did say in an amended IPO filing in March that it would need to cut jobs this fiscal year if certain funding doesn’t come through. Good also showed improved financials, in terms of growing revenue and reduced losses, in that filing. Meanwhile, the company continues its business momentum with deals such as an extended global reseller agreement announced with Samsung Electronics America in June.

*Sony:
Reuters and others reported in January that Sony would be cutting around 1,000 jobs as a result of its smartphone division’s struggles. The Wall Street Journal in March wrote that Sony was clipping 2,000 of its 7,000 mobile unit workers as it attempts to eke out a profit and refocus, possibly on software, to fare better vs. Apple and other market leaders. Sony’s mobile business, despite solid reviews for its Xperia line of handsets, is nearly nonexistent in big markets such as the United States and China, according to the WSJ report. Still, the company’s president says Sony will never exit the market.

*Citrix:
The company’s 900 job cuts, announced in January along with a restructuring and improved revenue, were described by one analyst as “defensive layoffs” made in view of some disconcerting macroeconomic indicators, such as lower oil prices and a strengthening dollar. The virtualization company said its restructuring, including layoffs of 700 full-time employees and 200 contractors, would save it $90 million to $100 million per year as it battles VMware, Microsoft and others in the virtualization and cloud markets.

*NetApp:
The company announced in May, while revealing disappointing financial results, that it would be laying off 500 people, or about 4% of its workforce. It’s the third straight year that the storage company has had workforce reductions, and industry watchers are increasingly down on NetApp. The company has been expanding its cloud offerings but has also been challenged by customers’ moves to the cloud and the emergence of new hyperconvergence players attacking its turf.

*Microsoft:
In scaling down its mobile phone activities, Microsoft is writing off the whole value of the former Nokia smartphone business it bought last year and laying off up to 7,800 people from that unit. Microsoft also announced 18,000 job cuts last year, including many from the Nokia buyout. Despite an apparent departure from the phone business, CEO Satya Nadella said Microsoft remains committed to Windows Phone products and working with partners.

*BlackBerry:
The beleaguered smartphone maker acknowledged in May it was cutting an unspecified number of staff in its devices unit in an effort to return to profitability and focus on new areas, such as the Internet of Things (it did eke out a quarterly profit earlier this year, though it is still on pace to register a loss for the year). The Waterloo, Ontario outfit said in a statement that it had decided to unite its device software, hardware and applications business, “impacting a number of employees around the world.” Then in July BlackBerry again said it was making job cuts, and again didn’t specify the number.

*Qualcomm:
The wireless chipmaker is the latest whose name is attached to layoff speculation, and official word on this could come as soon as this week, given the company is announcing its quarterly results. The San Diego Union-Tribune reports that “deep cost cuts” could be in the offing, including thousands of layoffs, possibly equaling 10% of the staff. The company wasn’t commenting ahead of its earnings conference call on July 22. Qualcomm has been a high flyer in recent years as a result of the smartphone boom, but regulatory issues in China, market share gains by Apple and being snubbed by Samsung in its latest flagship phone have all hurt Qualcomm of late, the Union-Tribune reports.

*Lexmark: The printer and printer services company this month announced plans for 500 layoffs as part of a restructuring related to a couple of recent acquisitions. The $3.7 billion Kentucky-based company employs more than 12,000 people worldwide.


 


The top 10 supercomputers in the world, 20 years ago

In 1995, the top-grossing film in the U.S. was Batman Forever. (Val Kilmer as Batman, Jim Carrey as the Riddler, Tommy Lee Jones as Two-Face. Yeah.) The L.A. Rams were moving back to St. Louis, and Michael Jordan was moving back to the Bulls. Violence was rife in the Balkans. The O.J. trial happened.

It was a very different time, to be sure. But all that was nothing compared to how different the world of supercomputing was.

The Top500 list from June 1995 shows just how far the possibilities of silicon have come in the past 20 years. Performance figures are listed in gigaflops, rather than the teraflops of today, meaning that, for example, the 10th-place entrant in this week’s newly released list is more than 84,513 times faster than its two-decades-ago equivalent.
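To put that ratio in concrete terms, here is a quick back-of-the-envelope calculation. The 2015 figure below is derived from the article’s own 84,513x ratio rather than quoted from the Top500 list itself, so treat it as an illustration of scale.

```python
# Rough scale of the 1995-vs-2015 gap implied by the 84,513x figure above.
# The 2015 tenth-place number is derived from that ratio, not quoted from
# the Top500 list itself.
gflops_1995_10th = 50.8                      # Cray T3D-MC512-8, #10 in June 1995
ratio = 84_513                               # speedup cited in the article
gflops_2015_10th = gflops_1995_10th * ratio  # ~4.3 million gigaflops

print(f"{gflops_2015_10th / 1e3:,.0f} teraflops "
      f"(about {gflops_2015_10th / 1e6:.1f} petaflops)")
# -> roughly 4,293 teraflops, i.e. about 4.3 petaflops
```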

#10: 1995 – Cray T3D-MC512-8, Pittsburgh Supercomputing Center, 50.8 GFLOP/S
The Pittsburgh Supercomputing Center is still an active facility, though none of its three named systems – Sherlock, Blacklight and Anton – appear on the latest Top500 list. The last time it was there was 2006, with a machine dubbed Big Ben placing 256th. (The PSC’s AlphaServer SC45 took second place in 2001 with a speed of 7,266 gigaflops.)

#9: 1995 – Cray T3D-MC512-8, Los Alamos National Laboratory, 50.8 GFLOP/S
Yes, it’s the same machine twice, which demonstrates that supercomputers were less likely to be bespoke systems filling giant rooms of their own, and more likely to be something you just bought from Cray or Intel. JUQUEEN, the ninth-place system on the 2015 list, is more than 98,600 times as powerful as the old T3D-MC512-8, a 512-core device that appears to have been more or less contained to a pair of big cabinets.

#8: 1995 – Thinking Machines CM-5/896, Minnesota Supercomputer Center, 52.3 GFLOP/S
Thinking Machines was an early supercomputer manufacturer, based in the Boston area, that had actually gone bankrupt already by the time the June 1995 Top500 list was published – Sun Microsystems would eventually acquire most of its assets in a 1996 buyout deal. The University of Minnesota’s HPC department is now the Minnesota Supercomputing Institute, whose new Mesabi system placed 141st on the latest list at 4.74 teraflops.

#7: 1995 – Fujitsu VPP500/42, Japan Atomic Energy Research Institute, 54.5 GFLOP/S
Fujitsu’s been a fixture on the Top500 since the list was first published in 1993, and 1995 was no exception, with the company picking up three of the top 10 spots. The Japan Atomic Energy Research Institute has dropped off the list since 2008, though it may be set to return soon, with the recent announcement that it had agreed to purchase a Silicon Graphics ICE X system with a theoretical top speed of 2.4 petaflops – which would place it just outside the top 25 on the latest list.

#6: 1995 – Thinking Machines CM-5/1056, Los Alamos National Laboratory, 59.7 GFLOP/S
For the record, we’re well over the 100,000x performance disparity between these two systems at this point. One thing that’s notable about 1995’s systems compared to today’s is the small number of cores – the CM-5 that placed sixth in 1995 used 1,056 cores, and the Fujitsu behind it used only 42. Per-core performance is still orders of magnitude higher today, but it’s worth noting that a huge proportion of the total performance increase is due to the vastly higher number of processor cores in use – no system on the 2015 list had fewer than 189,792, counting accelerators.

#5: 1995 – Fujitsu VPP500/80, Japan National Laboratory for High Energy Physics, 98.9 GFLOP/S
The performance ratio against the 2015 list is back down to about 87,000, thanks to the substantial jump up to the 80-core Fujitsu’s nearly 100-gigaflop mark. The VPP500/80 would remain on the list through 1999, never dropping below the 90th position.

#4: 1995 – Cray T3D MC1024-8, undisclosed U.S. government facility, 100.5 GFLOP/S
The T3D MC1024-8 system used at an undisclosed government facility (which is almost certainly not the NSA, of course) was the first on the 1995 list to top the 100 gigaflop mark, and it stayed on the Top500 until 2001. That’s a solid run, and one that the Fujitsu K computer, in its fourth year in the top 5, would do well to emulate.

#3: 1995 – Intel XP/S-MP 150, Oak Ridge National Laboratory, 127.1 GFLOP/S
The Department of Energy’s strong presence on the upper rungs of the Top500 list is one thing that hasn’t changed in 20 years, it seems – four of the top 10 in both 2015 and 1995 were administered by the DOE. The XP/S-MP 150 system, with 3,072 processor cores, boasted roughly three times as many as all but one other entry on the list, in a sign of things to come.

#2: 1995 – Intel XP/S140, Sandia National Laboratory, 143.4 GFLOP/S
Indeed, the other Intel system on the 1995 list was the only other one with more cores, at 3,608. It’s even starting to look more like a modern supercomputer.

#1: 1995 – Fujitsu Numerical Wind Tunnel, National Aerospace Laboratory of Japan, 170 GFLOP/S
The Numerical Wind Tunnel, as the name suggests, was used for fluid dynamics simulations in aerospace research, most notably the classic wind tunnel testing to measure stability and various forces acting on an airframe at speed. The 2015 winner, China’s Tianhe-2, is almost two hundred thousand times as powerful, however.


 


Why the open source business model is a failure

Most open source companies can’t thrive by selling maintenance and support subscriptions. But the cloud may be the key to revenue generation.

Open source software companies must move to the cloud and add proprietary code to their products to succeed. The current business model is a recipe for failure.

That’s the conclusion of Peter Levine, a partner at Andreessen Horowitz, the Silicon Valley venture capital firm that backed Facebook, Skype, Twitter and Box as startups. Levine is also former CEO of XenSource, a company that commercialized products based on the open source Xen hypervisor.

Levine says the conventional open source business model is flawed: Open source companies that charge for maintenance, support, warranties and indemnities for an application or operating system that is available for free simply can’t generate enough revenue.

“That means open source companies have a problem investing in innovation, making them dependent on the open source community to come up with innovations,” he says.

Why is that a problem? After all, the community-based open source development model has proved itself to be more than capable of coming up with innovative and very useful pieces of software.
Revenue limits

The answer is that without adequate funding, open source businesses can’t differentiate their products significantly from the open source code their products are based on, Levine maintains. Because of that there’s less incentive for potential customers to pay for their products rather than continue using the underlying code for nothing. At the very least it limits the amount that open source businesses can hope to charge – putting a cap on their potential revenues. It’s a vicious circle.

“If we look at Red Hat’s market, 50 percent of potential customers may use Fedora (the free Linux distribution), and 50 percent use Red Hat Enterprise Linux (the version that is supported and maintained by Red Hat on a subscription basis). So a large part of the potential market is carved off – why should people pay the ‘Red Hat tax’?” Levine asks.

You could argue that this is actually good for businesses, because the availability of open source software at no cost provides competition to open source companies’ offerings based on the same code, ensuring that these offerings are available at a very reasonable price.

But if open source businesses can’t monetize their products effectively enough to invest in innovation, then potential corporate clients can’t benefit from the fruits of that innovation, and that’s not so good for customers.
Uneven playing field

The problem is compounded when you consider that open source companies’ products are not just competing with the freely available software on which their products are built. It’s often the case that they also have to compete with similar products sold by proprietary software companies. And that particular playing field is often an uneven one, because the low revenues that open source companies can generate from subscriptions mean that they can’t match the huge sales and marketing budgets of competitors with proprietary product offerings.

It’s an important point because although sales and marketing activities are costly, they’re also effective. If they weren’t, companies wouldn’t waste money on them.

So it follows that open source companies miss out on sales even when they have a superior offering, because having the best product isn’t enough. It’s also necessary to convince customers to buy it, through clever marketing and persuasive sales efforts.

The problem, summed up by Tony Wasserman, a professor of software management practice at Carnegie Mellon University, is that when you’re looking to acquire new software, “open source companies won’t take you out to play golf.”

The result, says Levine, is that open source companies simply can’t compete with proprietary vendors on equal terms. “If you look at Red Hat, MySQL, KVM … in every case where there’s a proprietary vendor competing, they have more business traction and much more revenue than their open source counterparts.”

As an illustration of the scale of the problem, Red Hat is generally held up as the poster child of open source companies. It offers an operating system and a server virtualization system, yet its total revenues are about a third of specialist virtualization vendor VMware’s, and about 1/40th of Microsoft’s.
Hybrid future

This is why Levine has concluded that the way for open source companies to make money out of open source software is to abandon the standard open source business model of selling support and maintenance subscriptions, and instead to use open source software as a platform on which to build software as a service (SaaS) offerings.

“I can run a SaaS product by using Fedora as a base, but then building proprietary stuff on top and selling the service. So the monetization goes to the SaaS product, not to an open source product,” says Levine. “I think we’ll start to see an increasing number of SaaS offerings that are a hybrid of open source and proprietary software.”


He adds that many SaaS companies – including Salesforce, as well as DigitalOcean and GitHub (two companies Andreessen Horowitz has invested in) – already use a mix of open source and proprietary software to build their services.

And Levine says that Facebook is the biggest open source software company of them all. “I was shocked when I realized this, and Google probably is the second biggest,” he says.

Facebook has developed and uses open source software for the infrastructure on which its social network is built, and adds its own proprietary software on top to produce a service it can monetize. Google also generates a large volume of open source infrastructure code, although its search and advertising software is proprietary, he adds.

While the existence of free-to-download software undoubtedly makes it harder for open source businesses to monetize the same software by adding support, maintenance and so on, it’s also the case that these low-cost alternatives must make life more difficult than otherwise for proprietary vendors trying to sell their products into the same market.

That’s because these low-cost alternatives necessarily make the market for proprietary software smaller even if proprietary companies have higher revenues that they can use to innovate, differentiate their products, and market them.

This could help explain why some proprietary software companies are moving their products to the cloud, or at least creating SaaS alternatives. A mature product like Microsoft’s Office suite can largely be functionally replicated by an open source alternative like LibreOffice, but Microsoft’s cloud-based Office 365 product takes the base Office functionality and adds extra services such as file storage, Active Directory integration and mobile apps on top.

That’s much harder for anyone to replicate, open source or not. And it suggests that in the future it will be all software companies, not just open source shops, that move to the cloud to offer their software as a service.


Attackers abuse legacy routing protocol to amplify distributed denial-of-service attacks

Servers could be haunted by a ghost from the 1980s, as hackers have started abusing an obsolete routing protocol to launch distributed denial-of-service attacks.

DDoS attacks observed in May by the research team at Akamai abused home and small business (SOHO) routers that still support Routing Information Protocol version 1 (RIPv1). This protocol is designed to allow routers on small networks to exchange information about routes.

RIPv1 was first introduced in 1988 and was retired as an Internet standard in 1996 due to multiple deficiencies, including lack of authentication. These were addressed in RIP version 2, which is still in use today.

In the DDoS attacks seen by Akamai, which peaked at 12.8 gigabits per second, the attackers used about 500 SOHO routers that are still configured for RIPv1 in order to reflect and amplify their malicious traffic.

DDoS reflection is a technique that can be used to hide the real source of the attack, while amplification allows the attackers to increase the amount of traffic they can generate.

RIP allows a router to ask other routers for information stored in their routing tables. The problem is that the source IP (Internet Protocol) address of such a request can be spoofed, so the responding routers can be tricked to send their information to an IP address chosen by attackers—like the IP address of an intended victim.

This is a reflection attack because the victim will receive unsolicited traffic from abused routers, not directly from systems controlled by the attackers.

But there’s another important aspect to this technique: A typical RIPv1 request is 24 bytes in size, but if the responses generated by abused routers are larger than that, attackers can generate more traffic than they otherwise could with the bandwidth at their disposal.

In the attacks observed by Akamai, the abused routers responded with multiple 504-byte payloads—in some cases 10—for every 24-byte query, achieving a 13,000 percent amplification.
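The arithmetic behind that figure is simple: divide the bytes reflected at the victim by the bytes the attacker had to send. A minimal sketch follows; the payload counts are illustrative values within the range Akamai describes, not figures taken from the report itself.

```python
# Amplification factor for a reflected RIPv1 query: bytes sent to the victim
# divided by bytes the attacker spent. Payload counts per query below are
# illustrative; Akamai observed multiple 504-byte responses per 24-byte query,
# up to 10 in some cases.
REQUEST_BYTES = 24      # size of a RIPv1 full-table request
RESPONSE_BYTES = 504    # size of each response payload observed

def amplification(payloads_per_query: int) -> float:
    return (payloads_per_query * RESPONSE_BYTES) / REQUEST_BYTES

for n in (1, 6, 10):
    print(f"{n:2d} payload(s): {amplification(n):6.1f}x "
          f"({amplification(n) * 100:,.0f}% amplification)")
```

At around six payloads per query the factor already works out to roughly 12,600 percent, in line with the reported figure; ten payloads per query would be closer to 21,000 percent.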

Other protocols can also be exploited for DDoS reflection and amplification if servers are not configured correctly, including DNS (Domain Name System), mDNS (multicast DNS), NTP (Network Time Protocol) and SNMP (Simple Network Management Protocol).

The Akamai team scanned the Internet and found 53,693 devices that could be used for DDoS reflection using the RIPv1 protocol. Most of them were home and small business routers.

The researchers were able to determine the device make and model for more than 20,000 of them, because they also had their Web-based management interfaces exposed to the Internet.

Around 19,000 were Netopia 3000 and 2000 series DSL routers distributed by ISPs, primarily from the U.S., to their customers. AT&T had the largest concentration of these devices on its network—around 10,000—followed by BellSouth and MegaPath, each with 4,000.

More than 4,000 of the RIPv1 devices found by Akamai were ZTE ZXV10 ADSL modems and a few hundred were TP-Link TD-8xxx series routers.

While all of these devices can be used for DDoS reflection, not all of them are suitable for amplification. Many respond to RIPv1 queries with a single route, but the researchers identified 24,212 devices that offered at least an 83 percent amplification rate.
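Administrators who want to check whether their own equipment still answers RIPv1 can send the same 24-byte full-table request that such scans rely on. Here is a minimal sketch in Python; the target address is a placeholder, and it should only ever be pointed at routers you own or manage.

```python
# Minimal RIPv1 "send me your full routing table" probe (audit your OWN router).
# Packet layout (RFC 1058): a 4-byte header (command=1 request, version=1, two
# zero bytes) plus one 20-byte route entry with address family 0 and metric 16,
# which asks for the entire table -- 24 bytes total, matching the request size
# described above.
import socket
import struct

TARGET = "192.0.2.1"        # placeholder; replace with your own router's address

request = struct.pack(
    ">BBH HH 4s 4s 4s I",
    1, 1, 0,                # command=request, version=1, must-be-zero
    0, 0,                   # address family 0, must-be-zero
    b"\x00" * 4,            # IP address (unused in a full-table request)
    b"\x00" * 4,            # must-be-zero
    b"\x00" * 4,            # must-be-zero
    16,                     # metric 16 ("infinity") = send the whole table
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(request, (TARGET, 520))    # RIP listens on UDP port 520
try:
    data, addr = sock.recvfrom(4096)
    print(f"{addr[0]} answered with {len(data)} bytes: RIPv1 is still enabled")
except socket.timeout:
    print("No RIPv1 response within 3 seconds (good)")
finally:
    sock.close()
```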

To avoid falling victim to RIPv1-based attacks, server owners should use access control lists to restrict Internet traffic on UDP source port 520, the Akamai researchers said in their report. Meanwhile, the owners of RIPv1-enabled devices should switch to RIPv2, restrict the protocol’s use to the internal network only or, if neither of those options is viable, use access control lists to restrict RIPv1 traffic only to neighboring routers.



7 command line tools for monitoring your Linux system

Here is a selection of basic command line tools that will make your exploration and optimization in Linux easier.

Dive on in
One of the great things about Linux is how deeply you can dive into the system to explore how it works and to look for opportunities to fine tune performance or diagnose problems. Here is a selection of basic command line tools that will make your exploration and optimization easier. Most of these commands are already built into your Linux system, but in case they aren’t, just Google “install”, the command name, and the name of your distro and you’ll find which package needs installing (note that some commands are bundled with other commands in a package that has a different name from the one you’re looking for). If you have any other tools you use, let me know for our next Linux Tools roundup.

How we did it
FYI: The screenshots in this collection were created on Debian Linux 8.1 (“Jessie”) running in a virtual machine under Oracle VirtualBox 4.3.28 under OS X 10.10.3 (“Yosemite”). See my next slideshow “How to install Debian Linux in a VirtualBox VM” for a tutorial on how to build your own Debian VM.

Top command
One of the simpler Linux system monitoring tools, the top command comes with pretty much every flavor of Linux. This is the default display, but pressing the “z” key switches the display to color. Other hot keys and command line switches control things such as the display of summary and memory information (the second through fourth lines), sorting the list according to various criteria, killing tasks, and so on (you can find the complete list here).

htop
Htop is a more sophisticated alternative to top. Wikipedia: “Users often deploy htop in cases where Unix top does not provide enough information about the system’s processes, for example when trying to find minor memory leaks in applications. Htop is also popularly used interactively as a system monitor. Compared to top, it provides a more convenient, cursor-controlled interface for sending signals to processes.” (For more detail go here.)

Vmstat
Vmstat is a simpler tool for monitoring your Linux system’s performance statistics, but that simplicity makes it highly suitable for use in shell scripts. Fire up your regex-fu and you can do some amazing things with vmstat and cron jobs. “The first report produced gives averages since the last reboot. Additional reports give information on a sampling period of length delay. The process and memory reports are instantaneous in either case” (go here for more info).
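As an example of how vmstat’s plain-text output lends itself to scripting, here is a minimal sketch that samples it from Python and pulls out two columns. The column positions assume vmstat’s common default layout; check the header row on your system before relying on them.

```python
# Sample vmstat and extract free memory and idle CPU per interval. Column
# positions assume the common default layout (r b swpd free buff cache si so
# bi bo in cs us sy id wa st); verify against the header row on your system.
# Note: the first sample reports averages since boot, as described above.
import subprocess

def sample_vmstat(delay: int = 1, count: int = 3):
    out = subprocess.run(
        ["vmstat", str(delay), str(count)],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    samples = []
    for line in out[2:]:                  # skip the two header rows
        fields = line.split()
        samples.append({
            "free_kb": int(fields[3]),    # idle memory, in KiB
            "idle_cpu": int(fields[14]),  # percentage of CPU time spent idle
        })
    return samples

if __name__ == "__main__":
    for sample in sample_vmstat():
        print(sample)
```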

ps
The ps command shows a list of running processes. In this case, I’ve used the “-e” switch to show everything, that is, all processes running (I’ve scrolled back to the top of the output, otherwise the column names wouldn’t be visible). This command has a lot of switches that allow you to format the output as needed. Add a little of the aforementioned regex-fu and you’ve got a powerful tool. Go here for the full details.

Pstree
Pstree “shows running processes as a tree. The tree is rooted at either pid or init if pid is omitted. If a user name is specified, all process trees rooted at processes owned by that user are shown.” This is a really useful tool, as the tree helps you sort out which process is dependent on which process (go here).

pmap
Understanding just how an app uses memory is often crucial in debugging, and the pmap command produces just such information when given a process ID (PID). The screenshot shows the medium-weight output generated by using the “-x” switch. You can get pmap to produce even more detailed information using the “-X” switch, but you’ll need a much wider terminal window.

iostat
A crucial factor in your Linux system’s performance is processor and storage usage, which are what the iostat command reports on. As with the ps command, iostat has loads of switches that allow you to select the output format you need as well as sample performance over a time period and then repeat that sampling a number of times before reporting. See here.



7 steps to protect your business from cybercrime

As cybercriminals employ increasingly sophisticated tactics to steal identities and data, and the costs and consequences of data breaches skyrocket, here are seven steps that your small business should be taking to insulate itself from cyberattacks.

Take a bite out of cybercrime
Today, the modern workplace is crammed with computing devices ranging from desktops to laptops to tablets to smartphones, and employees are expected to use computers in the course of their day, regardless of what line of work they’re in.

The computer’s pivotal role in the workforce also means that hackers are finding cybercrime to be more lucrative than ever. And as cybercriminals employ increasingly sophisticated means of stealing identities and data, there is no option but for small businesses to do more in order to protect themselves.

There’s no doubt that security has evolved substantially since the early days of the PC. Indeed, measures that may have been deemed excessive just a few years ago are now considered to be merely adequate. With this in mind, we outline seven steps to protect your small business below.

1. Full disk encryption
A crucial first step towards protecting your data is to ensure that data is always encrypted at rest. Hard drives can be physically removed from a laptop or desktop and cloned in their entirety, whether by someone who temporarily commandeers a laptop left unattended in a hotel room or by whoever ends up with an old laptop whose storage drive has not been properly scrubbed of data before being sold.

With the right forensic analysis tools, a cloned hard drive can yield a treasure trove of data, including passwords, browser history, downloaded email messages, chat logs and even old documents that may have been previously deleted.

It is therefore critical that full disk encryption technology is enabled so that all data on storage drives is scrambled. Windows users can use Microsoft’s BitLocker, which is available free on the Pro version of Windows 8, or the Ultimate and Enterprise editions of Windows 7. Mac users can enable FileVault, which comes as part of the OS X operating system.

2. Consider encrypted file volumes
The use of full disk encryption ensures that all data written to the storage disk is scrambled by default, and gives businesses an excellent baseline of protection where their data is concerned. However, organizations that deal with sensitive information may want to up the ante by creating a separate encrypted file volume for their most sensitive files.

This typically necessitates an additional step of having to first mount an encrypted volume prior to being able to use it, though using it with full disk encryption is as close to uncrackable as you can get.

On this front, TrueCrypt was one of the most popular software programs for creating encrypted file volumes before the project was abruptly closed down. Fortunately, the open source project lives on in the form of forks VeraCrypt and CipherShed, both of which are available on Windows, OS X and Linux. VeraCrypt was forked slightly earlier as part of an initiative to blunt the effects of increasingly powerful computers and their abilities to brute force an encrypted volume, while CipherShed was forked from the last version of TrueCrypt, or version 7.1a.

3. Encrypt USB flash drives
USB flash drives are cheap and highly convenient devices to help users quickly transfer large files between computers. They’re also incredibly insecure, as their small size makes them vulnerable to being misplaced and/or stolen. Not only can careless handling of USB flash drives culminate in data leakage, but a casual analysis with off-the-shelf data recovery software will yield even previously deleted info.

One possible defense is to encrypt the data stored on your USB flash drive using the built-in capabilities of Windows or OS X. The downside is that this approach can be unintuitive for non-expert computer users, and it won’t work when trying to transfer files between different platforms, or even between operating system versions that lack support for it.
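For files that have to move between platforms on a USB stick, one portable option is to encrypt each file with a small script before copying it over. Here is a minimal sketch using the widely available Python cryptography package; the file name is hypothetical, and this is file-level encryption only, not a substitute for an encrypted volume or a hardware-encrypted drive.

```python
# Minimal file-level encryption sketch using the "cryptography" package's
# Fernet recipe (AES-128-CBC plus an HMAC integrity check). It protects
# individual files copied to a USB stick; it is not full-disk encryption.
# Install with: pip install cryptography
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_file(path: str, key: bytes) -> None:
    data = Path(path).read_bytes()
    Path(path + ".enc").write_bytes(Fernet(key).encrypt(data))

def decrypt_file(path: str, key: bytes) -> None:
    data = Path(path).read_bytes()
    Path(path.removesuffix(".enc")).write_bytes(Fernet(key).decrypt(data))

if __name__ == "__main__":
    key = Fernet.generate_key()         # keep this key safe, and NOT on the stick
    encrypt_file("report.xlsx", key)    # hypothetical file name
    decrypt_file("report.xlsx.enc", key)
```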

Alternatively, the use of a hardware-based encrypted USB flash drive offers a foolproof and convenient way for seamlessly encrypting data as it is being copied onto the drive. Some, like the Aegis Secure Key 3.0 Flash Drive, even eschew software authentication for physical buttons for authentication, offering a higher threshold of protection against spyware and keyloggers.

4. Mind your cloud storage
While cloud storage services are going to great lengths to ensure the integrity and privacy of the data you store with them, they’re nevertheless a magnet for potential snooping by unscrupulous employees, compromise by elite hackers, or even secret court orders (depending on where the data is physically located).

This means that the safest measure is to either ditch public cloud storage services altogether, or to ensure that you upload only encrypted data. For the latter, a number of cloud services such as SpiderOak specialize in helping you ensure that only strongly encrypted data is uploaded into the cloud.

An alternative is to rely on a private cloud hosted on a network-attached storage device such as the Synology RS3614RPxs, or to explore peer-to-peer private synchronization such as BitTorrent Sync, where data is automatically replicated among privately-owned devices.

5. Use a password manager
Not using a password manager results in users relying on mediocre passwords, as well as a significant increase in the reuse of those weak passwords across multiple websites or online services. This should be of particular concern, given how countless security breaches over the last few years have shown that most organizations simply do not store passwords with adequate protection against brute force or social engineering.

For heightened security, some password managers also support the use of a physical fob to unlock their password database. This offers great convenience, and could limit the damage caused by spyware when authenticating via a one-time password (OTP).

6. Enable multifactor authentication

As its name suggests, multifactor authentication relies on an additional source of authenticating information before allowing you to log in to a system. The most common secondary sources are probably a PIN code sent via text message or an app-generated code that changes with time. Multifactor authentication is available for many services today, including cloud storage services like Dropbox and popular services like Google Apps.
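To make the “app-generated code that changes with time” concrete, here is a minimal sketch of the standard TOTP algorithm (RFC 6238) that such authenticator apps implement. The base32 secret shown is just an example value.

```python
# Time-based one-time password (TOTP, RFC 6238) using HMAC-SHA1, the scheme
# implemented by common authenticator apps. The secret below is an example.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # example shared secret, as a QR code would encode
```

Because the code depends only on the shared secret and the current 30-second window, the server can compute the same value independently and compare it with what the user types in.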

Another popular multifactor authentication method is a physical dongle that plugs into an available USB port and emits an OTP code when tapped. When linked to a password manager service such as LastPass, a security fob such as a YubiKey can reduce the risks of accessing the password service on an untrusted machine, as well as offering protection from phishing attempts.

7. Protect your password reset options
Finally, one often-overlooked area that has been successfully exploited by hackers in the past is the password reset mechanism found on almost all Web services. With the wealth of details published on our social networks, and many other salient personal details being a simple Google search away, it makes sense to review our “hint” questions and other information that could be used to reset our most important online accounts.

Unorthodox methods exist, too — such as when a hacker successfully social-engineered his way into controlling an entire domain in order to intercept the password reset emails of a targeted account (see “4 Small Business Security Lessons from Real-Life Hacks”). One way to thwart such an attack is to register an email address on a prominent domain such as Gmail.com or Outlook.com as the backup account that receives password reset messages.

Following these steps won’t make you invulnerable against hackers, but it should go a long way towards helping you secure your data from some of the most common cyberattacks we know about today.



Microsoft needs SDN for Azure cloud

Couldn’t scale without it, Azure CTO says
The Microsoft cloud, through which the company’s software products are delivered, has 22 hyper-scale regions around the world. Azure storage and compute usage is doubling every six months, and Azure lines up 90,000 new subscribers a month.


Fifty-seven percent of the Fortune 500 use Azure and the number of hosts quickly grew from 100,000 to millions, said CTO Mark Russinovich during his Open Network Summit keynote address here this week. Azure needs a virtualized, partitioned and scale-out design, delivered through software, in order to keep up with that kind of growth.

“When we started to build these networks and started to see these types of requirements, the scale we were operating at, you can’t have humans provisioning things,” Russinovich said. “You’ve got to have systems that are very flexible and also delivering functionality very quickly. This meant we couldn’t go to the Web and do an Internet search for a scalable cloud controller that supports this kind of functionality. It just didn’t exist.”

Microsoft wrote all of the software code for Azure’s SDN. A description of it can be found here.
Microsoft uses virtual networks (Vnets) built from overlays and Network Functions Virtualization services running as software on commodity servers. Vnets are partitioned through Azure controllers established as a set of interconnected services, and each service is partitioned to scale and run protocols on multiple instances for high availability.

Controllers are established in regions where there could be 100,000 to 500,000 hosts. Within those regions are smaller clustered controllers which act as stateless caches for up to 1,000 hosts.

Microsoft builds these controllers using an internally developed Service Fabric for Azure. Service Fabric has what Microsoft calls a microservices-based architecture that allows customers to update individual application components without having to update the entire application.

Microsoft makes the Azure Service Fabric SDK available here.
Much of the programmability of the Azure SDN is performed on the host server with hardware assist. A Virtual Filtering Platform (VFP) in Hyper-V hosts enables Azure’s data plane to act as a Hyper-V virtual network programmable switch for network agents that work on behalf of controllers for Vnet and other functions, like load balancing.

Packet processing is done at the host where a NIC with a Field Programmable Gate Array offloads network processing from the host CPU to scale the Azure data plane from 1Gbps to 40Gbps and beyond. That helps retain host CPU cycles for processing customer VMs, Microsoft says.

Remote Direct Memory Access is employed for the high-performance storage back-end to Azure.
Though SDNs and open source go hand-in-hand, there’s no open source software content in the Azure SDN. That’s because the functionality required for Azure was not offered through open source communities, Russinovich says.

“As these requirements were hitting us, there was no open source out there able to meet them,” he says. “And once you start on a path where you’re starting to build out infrastructure and system, even if there’s something else that comes along and addresses those requirements the switching cost is pretty huge. It’s not an aversion to it; it’s that we haven’t seen open source out there that really meets our needs, and there’s a switching cost that we have to take into account, which will slow us down.”

Microsoft is, however, considering contributing the Azure Service Fabric architecture to the open source community, Russinovich said. But there has to be some symbiosis.

“What’s secret sauce, what’s not; what’s the cost of contributing to open source, what’s the benefit to customers of open source, what’s the benefit to us penetrating markets,” he says. “It’s a constant evaluation.”

One of the challenges in constructing the Azure SDN was retrofitting existing controllers into the Service Fabric, Russinovich says. That resulted in some scaling issues.

“Some of the original controllers were written not using Service Fabric so they were not microservice oriented,” he says. “We immediately started to run into scale challenges with that. Existing ones are being (rewritten) onto Service Fabric.

“Another one is this evolution of the VFP and how it does packet processing. That is not something that we sat down initially and said, ‘it’s connections, not flows.’ We need to make sure that packet processing on every packet after the connection is set up needs to be highly efficient. It’s been the challenge of being able to operate efficiently, scale it up quickly, being able to deliver features into it quickly, and being able to take the load off the server so we can run VMs on it.”

What’s next for the Azure SDN? Preparing for more explosive growth of the Microsoft cloud, Russinovich says.

“It’s a constant evolution in terms of functionality and features,” he says. “You’re going to see us get richer and more powerful abstractions at the network level from a customer API perspective. We’re going to see 10X scale in a few years.”


Preparing for your Windows Server upgrade

It’s time to say goodbye to Windows Server 2003. Getting through the migration requires not just Windows expertise, but knowledge of your app portfolio

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

If you’ve been clinging to Windows Server 2003, trying to ignore the fact that Microsoft will officially end support on July 14, 2015, you’re playing with fire. Once the updates stop, you’ll be exposed to troubling security and compliance risks. Take note that in 2013 alone, 37 updates were issued by Microsoft for Windows Server 2003/R2.

Yet upgrading servers is a resource challenge as well as a mindset issue. The top barrier for migration, according to a survey, is the belief that existing systems are working just fine, and many users worry about software incompatibility.

The actual migration process to Windows Server 2008 or 2012 (the likely choices) is straightforward and well-documented, and most Windows engineers can easily learn how to work in a new OS. The complexity lies in determining if and how business applications will successfully transition to the new platform, and which ones will need to be replaced or shuttered.

Some IT shops will find they simply don’t have time to undergo this rigorous process. External service providers can help. Even if you have a sizable IT staff, you’ll need to consider whether it’s a worthwhile use of a senior engineer’s time to work on server migrations, compared with other high-priority projects. Regardless of your approach – internally or externally managed – here are some steps for working through a successful move away from Windows Server 2003.

1. It is often surprising what midsize and large companies don’t know about their internal IT systems. It’s critical to identify how many servers you have, where they’re located, and what OS and applications they’re running. That gives insight into how many servers and which applications are at risk. Asset management software can help by updating this information continually, saving crucial time in the analysis. Don’t forget to document what security systems are in place on servers, networks and applications.

2. It’s important to work closely with business unit heads to communicate why and when the migration is happening and any expected changes to their applications. Determine what IT specialists you need (including database and application managers) and if you can free them up for the migration or if you’ll need outside help.

3. Most companies will likely opt for Windows Server 2012, simply because it will last longer and it’s the latest version. Yet whether this is feasible depends on your applications. If a critical application or two aren’t compatible with, or don’t have a near-term upgrade path to, your desired OS, you face a decision: replace them or retire them. Work closely with application vendors to understand if and when they will issue an updated version, keeping in mind that promises don’t always pan out.

An application might also require running on a 32-bit version of the software. While both 2008 and 2012 offer 32-bit versions, this will cut performance. We’ve seen at least one case in which a company had to undergo two upgrades for a particular application – from 2003 to 2008 and finally to 2012 because the application vendor was not ready for 2012. Knowing these factors ahead of time makes all the difference as you plan for migration.

4. A positive outcome of being forced into migration (other than getting a better and faster OS) is that it’s the perfect time to push for a change in strategy. Most IT organizations will need to replace their hardware to install 2008 or 2012, yet there’s also the question of whether your company should continue owning equipment at all. Companies of all sizes and sectors are looking harder at hosted and cloud environments, which reduce the need for daily IT support for standard processes such as server maintenance. For those companies still worried about security and compliance, a co-location arrangement at a nearby data center can reduce some of the risk and cost of maintaining hardware on site. Managed services allow your staff to focus on initiatives that add real value to the business, rather than maintaining systems.

5. For a midsize to large company with dozens of servers and hundreds of applications, sorting out a migration plan can be overwhelming. Here’s a simple way to look at it. First, you’ll want to move any customer-facing apps and public websites, since they present the greatest potential damage to your business if impaired or hacked. Next, begin the process of migrating applications that have compatibility problems and require customization or upgrades, as they’ll take the longest time to prepare. In parallel, migrate the easy-to-move applications. These are the ones that are already primed to run on an upgraded operating system or can be upgraded quickly.

Technically, this is a straightforward process once you tackle all the previous challenges. However, server migration is not just a technical project. You’ll need people to help with coordination and communication with the business, project management and support. You’ll of course want to test the applications on the new servers before retiring the old ones. Backups are absolutely critical.

What if, despite your best efforts, you find yourself in no man’s land, past the deadline, and your environment is still not fully transitioned to the new server platform? To mitigate security and reliability risks, ensure that all applications which are exposed to the Internet are fully encrypted and that all servers are also locked down. You’ll need to invest more time monitoring applications that remain on 2003, watching for potential breaches or suspicious behavior.

If you’ve not already started on a Windows Server 2003 migration plan, don’t wait another minute, but don’t panic either. There’s a world of experienced consultants and providers out there ready to help you complete a successful upgrade and keep your business running smoothly.



Patch Tuesday June 2015: 4 of Microsoft’s 8 patches close remote code execution holes

Microsoft released eight security bulletins, two rated critical, but four address remote code execution vulnerabilities that an attacker could exploit to take control of a victim’s machine.

For June 2015 “Update Tuesday,” Microsoft released 8 security bulletins; only two of the security updates are rated critical for resolving remote code execution (RCE) flaws, but two patches rated important also address RCE vulnerabilities.

Rated as Critical
MS15-056 is a cumulative security update for Internet Explorer, which fixes 24 vulnerabilities. Qualys CTO Wolfgang Kandek added, “This includes 20 critical flaws that can lead to RCE which an attacker would trigger through a malicious webpage. All versions of IE and Windows are affected. Patch this first and fast.”

Microsoft said the patch resolves vulnerabilities by “preventing browser histories from being accessed by a malicious site; adding additional permission validations to Internet Explorer; and modifying how Internet Explorer handles objects in memory.”

MS15-057 fixes a hole in Windows that could allow remote code execution if Windows Media Player opens specially crafted media content that is hosted on a malicious site. An attacker could exploit this vulnerability to “take complete control of an affected system remotely.”

Rated as Important
MS15-058 appears only as a placeholder, but MS15-059 and MS15-060 both address remote code execution flaws.

MS15-059 addresses RCE vulnerabilities in Microsoft Office. Although it’s rated important for Microsoft Office 2010 and 2013, Microsoft Office Compatibility Pack Service Pack 3 and Microsoft Office 2013 RT, Kandek said it should be your second patching priority. If an attacker could convince a user to open a malicious file with Word or any other Office tool, then he or she could take control of a user’s machine. “The fact that one can achieve RCE, plus the ease with which an attacker can convince the target to open an attached file through social engineering, make this a high-risk vulnerability.”

MS15-060 resolves a vulnerability in Microsoft Windows “common controls.” The vulnerability “could allow remote code execution if a user clicks a specially crafted link, or a link to specially crafted content, and then invokes F12 Developer Tools in Internet Explorer.” Kandek explained, “MS15-060 is a vulnerability in the common controls of Windows which is accessible through Internet Explorer Developer Menu. An attack needs to trigger this menu to be successful. Turning off developer mode in Internet Explorer (why is it on by default?) is a listed work-around and is a good defense in depth measure that you should take a look at for your machines.”

The last four patches Microsoft issued address elevation of privilege vulnerabilities.

MS15-061 resolves bugs in Microsoft Windows kernel-mode drivers. “The most severe of these vulnerabilities could allow elevation of privilege if an attacker logs on to the system and runs a specially crafted application. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

MS15-062 addresses a security hole in Microsoft Active Directory Federal Services. Microsoft said, “The vulnerability could allow elevation of privilege if an attacker submits a specially crafted URL to a target site. Due to the vulnerability, in specific situations specially crafted script is not properly sanitized, which subsequently could lead to an attacker-supplied script being run in the security context of a user who views the malicious content. For cross-site scripting attacks, this vulnerability requires that a user be visiting a compromised site for any malicious action to occur.”

MS15-063 is another patch for Windows kernel that could allow EoP “if an attacker places a malicious .dll file in a local directory on the machine or on a network share. An attacker would then have to wait for a user to run a program that can load a malicious .dll file, resulting in elevation of privilege. However, in all cases an attacker would have no way to force a user to visit such a network share or website.”

MS15-064 resolves vulnerabilities in Microsoft Exchange Server by “modifying how Exchange web applications manage same-origin policy; modifying how Exchange web applications manage user session authentication; and correcting how Exchange web applications sanitize HTML strings.”

It would be wise to patch Adobe Flash while you are at it, as four of the 13 vulnerabilities patched are rated critical.

Happy patching!



Security Is a Prisoner of the Network

Cybersecurity professionals must gain experience and get comfortable with virtual network security

I have a very distinct memory about a conversation I had with a colleague in the mid-to-late 1990s about how NetWare worked. I told him that file and print services resided “in the network” but he couldn’t get his arms around this concept. He continually pushed back by saying things like, “well the printers and file servers have to be plugged into the network so isn’t NetWare just running on these devices?”

His assumption was somewhat accurate, since NetWare did control physical file servers and printers. What he didn’t get, however, was that NetWare made physical network devices subservient to global, virtual file and print services. Before NetWare (and similar technologies like Sun’s NFS), you had to have a physical connection to a device and/or control these connections on a device-by-device basis. Novell radically changed this by using software to abstract connections. This made it much easier to point users at local printers and file shares while applying central access controls for security and privacy.

Why am I strolling down memory LAN (author’s note: I am pretty proud of this pun)? Because we face a similar changing situation today with regard to network security and cloud computing. I contend that security has been a prisoner of the network over the past 20 years.

During this timeframe, large organizations deployed an army of network security devices to filter or at least inspect IP packets for security purposes. As organizations added more servers and more network traffic, they were forced to add more network security devices. This required a series of unnatural acts like moving traffic to and fro so it could pass by various security checkpoints. Security and network engineers also created security zones with physical and virtual network segmentation, and employed teams of people to create and manage ACLs, firewalls, WAFs, etc.

Not surprisingly, network security has become incredibly complex, cumbersome, and fragile as a result of layers upon layers of network imprisonment. It now takes a heroic effort from cybersecurity and network operations teams to keep up with these challenges.

Fast forward to 2015 and there is a radical change occurring. IT initiatives like server virtualization, cloud computing, NFV, and SDN are game changers poised to break the tight coupling between cybersecurity and the network.

Now this breakup is still in its early stages and like the song says: Breaking up is hard to do. For example, ESG research reveals that 60% of organizations say they are still learning how to apply network security policies (and policy enforcement) to public/private cloud infrastructure. Furthermore, 60% of organizations say that their network security operations and processes lack the right level of automation and orchestration necessary for public/private cloud computing infrastructure (note: I am an ESG employee).

As painful as this separation is today, CISOs and network engineers must understand that there may be a network security rainbow on the horizon. Just as NetWare turned file and print into a productive and operationally-efficient virtual network service, there are a number of technology trends and innovations that could enable CISOs to virtualize and distribute network security services across the entire network. For example:

Foundational technologies like SDN, NFV, Cisco ACI and VMware NSX.

Cloud security monitoring tools from HyTrust, ThreatConnect, and SkyHigh Networks, as well as cloud connectors for ArcSight, QRadar, RSA, and Splunk.

NetWare-like network security services software from CloudPassage, Illumio, and vArmour.

Network security orchestration tools from firms like RedSeal and Tufin.

Virtual editions of leading physical network security products from vendors like Check Point, Fortinet, Juniper, and Palo Alto Networks.

A few years ago, VMware declared that organizations could actually improve their cybersecurity positions by embracing server virtualization. While this seemed like blasphemy at the time, VMware was absolutely right. And the addition of the technologies and trends I mention above makes this statement even more possible. In order to get there however, CIOs, CISOs, and networking professionals have to think differently. Rather than try to emulate physical network security in the cloud, cybersecurity and networking staff must embrace virtual network security services, learn how to use them, and understand how to use them to improve security efficacy and operational efficiency.

Back in the 1990s, NetWare transformed file and print services and introduced an army of skilled IT professionals with CNE certifications. Over the next few years, we will see a similar revolution as security sheds its physical network shackles and assumes a role of virtual network services.

