7 command line tools for monitoring your Linux system

Here is a selection of basic command line tools that will make your exploration and optimization in Linux easier.

Dive on in
One of the great things about Linux is how deeply you can dive into the system to explore how it works and to look for opportunities to fine-tune performance or diagnose problems. Here is a selection of basic command line tools that will make your exploration and optimization easier. Most of these commands are already built into your Linux system, but if one isn’t, just Google “install”, the command name, and the name of your distro to find which package needs installing (note that some commands are bundled with other commands in a package whose name differs from the one you’re looking for). If you have any other tools you use, let me know for our next Linux Tools roundup.

How we did it
FYI: The screenshots in this collection were created on Debian Linux 8.1 (“Jessie”) running in a virtual machine under Oracle VirtualBox 4.3.28 under OS X 10.10.3 (“Yosemite”). See my next slideshow “How to install Debian Linux in a VirtualBox VM” for a tutorial on how to build your own Debian VM.

Top command
One of the simpler Linux system monitoring tools, the top command comes with pretty much every flavor of Linux. This is the default display, but pressing the “z” key switches the display to color. Other hot keys and command line switches control things such as the display of summary and memory information (the second through fourth lines), sorting the list according to various criteria, killing tasks, and so on (you can find the complete list here).
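For scripting or logging, top also has a non-interactive batch mode; a quick sketch (assuming the procps version of top shipped with most distros):

```shell
# Run top non-interactively: -b (batch mode) writes plain text to stdout,
# -n 1 stops after a single iteration -- handy for logs and cron jobs.
top -b -n 1 | head -n 5
```

The first few lines give you the uptime, load averages, and task/memory summaries without ever opening the interactive display.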

Htop
Htop is a more sophisticated alternative to top. Wikipedia: “Users often deploy htop in cases where Unix top does not provide enough information about the system’s processes, for example when trying to find minor memory leaks in applications. Htop is also popularly used interactively as a system monitor. Compared to top, it provides a more convenient, cursor-controlled interface for sending signals to processes.” (For more detail go here.)

Vmstat
Vmstat is a simpler tool for monitoring your Linux system performance statistics, but that simplicity makes it highly suitable for use in shell scripts. Fire up your regex-fu and you can do some amazing things with vmstat and cron jobs. “The first report produced gives averages since the last reboot. Additional reports give information on a sampling period of length delay. The process and memory reports are instantaneous in either case” (go here for more info).
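As a small sketch of that script-friendliness (column positions assume the usual procps vmstat layout, where free memory is the fourth field):

```shell
# Take three one-second samples; skip the two header lines, then print
# the free-memory column (field 4) from each data row -- the kind of
# one-liner that drops straight into a cron job.
vmstat 1 3 | awk 'NR > 2 { print "free kB:", $4 }'
```

Swap the awk expression for whatever fields you care about (swap-in/swap-out, I/O, CPU idle) and redirect to a log file to build a lightweight history of system behavior.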

Ps
The ps command shows a list of running processes. In this case, I’ve used the “-e” switch to show everything, that is, all processes running (I’ve scrolled back to the top of the output, otherwise the column names wouldn’t be visible). This command has a lot of switches that allow you to format the output as needed. Add a little of the aforementioned regex-fu and you’ve got a powerful tool. Go here for the full details.
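For instance, custom columns and sorting make ps handy for one-off questions like “what’s eating memory?” (a sketch using common GNU ps switches):

```shell
# -e: every process; -o: choose the columns; --sort=-%mem: biggest share
# of memory first. head keeps the header line plus the top five hogs.
ps -eo pid,ppid,%mem,comm --sort=-%mem | head -n 6
```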

Pstree
Pstree “shows running processes as a tree. The tree is rooted at either pid or init if pid is omitted. If a user name is specified, all process trees rooted at processes owned by that user are shown.” This is a really useful tool, as the tree helps you sort out which process is dependent on which process (go here).

Pmap
Understanding just how an app uses memory is often crucial in debugging, and the pmap command produces just such information when given a process ID (PID). The screenshot shows the medium-weight output generated by using the “-x” switch. You can get pmap to produce even more detailed information using the “-X” switch, but you’ll need a much wider terminal window.
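An easy way to try it is to point pmap at your current shell, since $$ expands to the shell’s own PID:

```shell
# Extended (-x) memory map of the current shell; the final line totals
# the Kbytes, RSS and Dirty columns across all mappings.
pmap -x $$
```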

Iostat
A crucial factor in your Linux system’s performance is processor and storage usage, which are what the iostat command reports on. As with the ps command, iostat has loads of switches that allow you to select the output format you need as well as sample performance over a time period and then repeat that sampling a number of times before reporting. See here.

MCTS Training, MCITP Training

Best Microsoft MCTS Certification, Microsoft MCITP Training at certkingdom.com

7 steps to protect your business from cybercrime

As cybercriminals employ increasingly sophisticated tactics to steal identities and data, and the costs and consequences of data breaches skyrocket, here are seven steps your small business should take to insulate itself from cyberattacks.

Take a bite out of cybercrime
The modern workplace is crammed with computing devices ranging from desktops to laptops to tablets to smartphones, and employees are expected to use computers in the course of their day, regardless of what line of work they’re in.

The computer’s pivotal role in the workforce also means that hackers are finding cybercrime to be more lucrative than ever. And as cybercriminals employ increasingly sophisticated means of stealing identities and data, there is no option but for small businesses to do more in order to protect themselves.

There’s no doubt that security has evolved substantially since the early days of the PC. Indeed, measures that may have been deemed excessive just a few years ago are now considered to be merely adequate. With this in mind, we outline seven steps to protect your small business below.

1. Full disk encryption
A crucial first step towards protecting your data is to ensure that it is always encrypted at rest. A hard drive can be physically removed from a laptop or desktop and cloned in its entirety, whether by someone who temporarily commandeers a laptop left unattended in a hotel room, or by the buyer of an old laptop whose storage drive was never properly scrubbed of data before being sold.

With the right forensic analysis tools, a cloned hard drive can yield a treasure trove of data, including passwords, browser history, downloaded email messages, chat logs and even old documents that may have been previously deleted.

It is therefore critical that full disk encryption technology is enabled so that all data on storage drives is scrambled. Windows users can use Microsoft’s BitLocker, which is available free on the Pro version of Windows 8, or the Ultimate and Enterprise editions of Windows 7. Mac users can enable FileVault, which comes as part of the OS X operating system.

2. Consider encrypted file volumes
The use of full disk encryption ensures that all data written to the storage disk is scrambled by default, and gives businesses an excellent baseline of protection where their data is concerned. However, organizations that deal with sensitive information may want to up the ante by creating a separate encrypted file volume for their most sensitive files.

This typically necessitates the additional step of mounting the encrypted volume before it can be used, though layering it on top of full disk encryption is about as close to uncrackable as you can get.

On this front, TrueCrypt was one of the most popular software programs for creating encrypted file volumes before the project was abruptly closed down. Fortunately, the open source project lives on in the form of forks VeraCrypt and CipherShed, both of which are available on Windows, OS X and Linux. VeraCrypt was forked slightly earlier as part of an initiative to blunt the effects of increasingly powerful computers and their ability to brute-force an encrypted volume, while CipherShed was forked from version 7.1a, the last version of TrueCrypt.

3. Encrypt USB flash drives
USB flash drives are cheap and highly convenient devices to help users quickly transfer large files between computers. They’re also incredibly insecure, as their small size makes them vulnerable to being misplaced and/or stolen. Not only can careless handling of USB flash drives culminate in data leakage, but a casual analysis with off-the-shelf data recovery software will yield even previously deleted info.

One possible defense is to encrypt the data stored on your USB flash drive using the built-in capabilities of Windows or OS X. The downside is that this approach can be unintuitive to non-expert computer users, and won’t work when trying to transfer files between different platforms, or even between operating system versions that lack the support for it.

Alternatively, a hardware-encrypted USB flash drive offers a foolproof and convenient way to seamlessly encrypt data as it is copied onto the drive. Some, like the Aegis Secure Key 3.0 Flash Drive, even eschew software authentication in favor of physical buttons, offering a higher threshold of protection against spyware and keyloggers.

4. Mind your cloud storage
While cloud storage services are going to great lengths to ensure the integrity and privacy of the data you store with them, they’re nevertheless a magnet for potential snooping by unscrupulous employees, compromise by elite hackers, or even secret court orders (depending on where the data is physically located).

This means that the safest measure is to either ditch public cloud storage services altogether, or to ensure that you upload only encrypted data. For the latter, a number of cloud services such as SpiderOak specialize in helping you ensure that only strongly encrypted data is uploaded into the cloud.

An alternative is to rely on a private cloud hosted on a network-attached storage device such as the Synology RS3614RPxs, or to explore peer-to-peer private synchronization such as BitTorrent Sync, where data is automatically replicated among privately-owned devices.

5. Use a password manager
Not using a password manager leaves users relying on mediocre passwords, and significantly increases the odds of those weak passwords being reused across multiple websites or online services. This should be of particular concern, given how countless security breaches over the last few years have shown that many organizations store passwords with inadequate protection against brute force or social engineering.

For heightened security, some password managers also support the use of a physical fob to unlock their password database. This offers great convenience, and could limit the damage caused by spyware when authenticating via a one-time password (OTP).
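Since the manager remembers credentials for you, every account can get a long random password. A minimal sketch using the kernel’s random source (the 20-character length and alphanumeric character set are arbitrary choices for illustration):

```shell
# Pull random bytes from /dev/urandom, keep only alphanumerics, and stop
# after 20 characters -- a password no human would try to reuse.
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20; echo
```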

6. Enable multifactor authentication
As its name suggests, multifactor authentication relies on an additional source of authenticating information before allowing you to log in to a system. The most common secondary sources are a PIN code sent via text message and an app-generated code that changes over time. Multifactor authentication is available for many services today, including cloud storage services like Dropbox and popular services like Google Apps.

Another popular form of multifactor authentication is a physical dongle that plugs into an available USB port and emits an OTP code when tapped. When linked to a password manager service such as LastPass, a security fob such as the YubiKey can reduce the risks of accessing the password service on an untrusted machine, as well as offer protection from phishing attempts.
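The codes these apps and fobs emit are typically HOTP/TOTP values (RFC 4226/6238): an HMAC-SHA1 over a moving counter (or a 30-second time step), truncated to six digits. A sketch with the openssl command line, hardcoding the RFC 4226 test secret and counter value 0 for illustration:

```shell
# HMAC-SHA1 over the 8-byte big-endian counter (0 here, as eight NUL
# bytes) with the shared secret, then RFC 4226 "dynamic truncation":
# the low nibble of the last byte picks a 4-byte window, whose top bit
# is masked before reducing modulo 10^6 to get six digits.
hmac=$(printf '\000\000\000\000\000\000\000\000' \
  | openssl dgst -sha1 -hmac "12345678901234567890" | awk '{print $NF}')
offset=$(( 0x$(printf '%s' "$hmac" | cut -c 40) ))
start=$(( offset * 2 + 1 ))
chunk=$(printf '%s' "$hmac" | cut -c "$start-$((start + 7))")
printf '%06d\n' $(( (0x$chunk & 0x7fffffff) % 1000000 ))
```

With those inputs, the output should match the published RFC 4226 test vector, 755224.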

7. Protect your password reset options
Finally, one often-overlooked area that has been successfully exploited by hackers in the past is the password reset mechanism found on almost all Web services. With the wealth of details published on our social networks, and many other salient personal details being a simple Google search away, it makes sense to review our “hint” questions and other information that could be used to reset our most important online accounts.

Unorthodox methods exist, too, such as when a hacker successfully social-engineered his way into controlling an entire domain in order to intercept the password reset emails of a targeted account (see “4 Small Business Security Lessons from Real-Life Hacks”). One way to thwart such an attack is to register an email address on a prominent domain such as Gmail.com or Outlook.com as the backup account that receives the password reset message.

Following these steps won’t make you invulnerable against hackers, but it should go a long way towards helping you secure your data from some of the most common cyberattacks we know about today.

Google looks for content makers to test its Jump VR video camera

Google may have a strong interest in applicants who have creative backgrounds, like film making and directing

If you’re an aspiring virtual reality content producer, Google wants to give you a chance to test the Jump camera system it developed for recording video to be used in VR environments.

People interested in trying their hand at capturing 360-degree video with Jump can fill out a form Google posted on Monday that asks basic biographical questions as well as details on how they would use the system.

Google didn’t say how many “select creators” it would choose, but those who are picked will be able to start using the 16-camera rig this summer.

Google seems especially interested in people with creative backgrounds. The jobs that people can select in the form’s occupation section include filmmaker, director, artist and production staff — but there is an “other” section that allows write-ins if none of the above apply.

There’s also a section where applicants can explain why they want to test Jump — and “awesome answers might put you at the top of the list,” Google said.

Google worked with GoPro to build Jump, which has 16 of the company’s Hero4 cameras attached to a circular frame. Jump’s price and availability weren’t provided when the rig was shown at Google’s I/O developer’s conference in May. However, given that a Hero4 camera retails for approximately US$500, initial Jump buyers will likely have deep pockets.

The first videos created with Jump will appear on YouTube this summer, Google said at I/O. People will be able to experience them via the Google Cardboard viewer.


Microsoft needs SDN for Azure cloud

Couldn’t scale without it, Azure CTO says
The Microsoft cloud, through which the company’s software products are delivered, has 22 hyper-scale regions around the world. Azure storage and compute usage is doubling every six months, and Azure lines up 90,000 new subscribers a month.

Fifty-seven percent of the Fortune 500 use Azure and the number of hosts quickly grew from 100,000 to millions, said CTO Mark Russinovich during his Open Network Summit keynote address here this week. Azure needs a virtualized, partitioned and scale-out design, delivered through software, in order to keep up with that kind of growth.

“When we started to build these networks and started to see these types of requirements, the scale we were operating at, you can’t have humans provisioning things,” Russinovich said. “You’ve got to have systems that are very flexible and also delivering functionality very quickly. This meant we couldn’t go to the Web and do an Internet search for a scalable cloud controller that supports this kind of functionality. It just didn’t exist.”

Microsoft wrote all of the software code for Azure’s SDN. A description of it can be found here.
Microsoft uses virtual networks (Vnets) built from overlays and Network Functions Virtualization services running as software on commodity servers. Vnets are partitioned through Azure controllers established as a set of interconnected services, and each service is partitioned to scale and run protocols on multiple instances for high availability.

Controllers are established in regions where there could be 100,000 to 500,000 hosts. Within those regions are smaller clustered controllers which act as stateless caches for up to 1,000 hosts.

Microsoft builds these controllers using an internally developed Service Fabric for Azure. Service Fabric has what Microsoft calls a microservices-based architecture that allows customers to update individual application components without having to update the entire application.

Microsoft makes the Azure Service Fabric SDK available here.
Much of the programmability of the Azure SDN is performed on the host server with hardware assist. A Virtual Filtering Platform (VFP) in Hyper-V hosts enables Azure’s data plane to act as a programmable Hyper-V virtual switch for network agents that work on behalf of controllers for Vnet and other functions, like load balancing.

Packet processing is done at the host where a NIC with a Field Programmable Gate Array offloads network processing from the host CPU to scale the Azure data plane from 1Gbps to 40Gbps and beyond. That helps retain host CPU cycles for processing customer VMs, Microsoft says.

Remote Direct Memory Access is employed for the high-performance storage back-end to Azure.
Though SDNs and open source go hand-in-hand, there’s no open source software content in the Azure SDN. That’s because the functionality required for Azure was not offered through open source communities, Russinovich says.

“As these requirements were hitting us, there was no open source out there able to meet them,” he says. “And once you start on a path where you’re starting to build out infrastructure and system, even if there’s something else that comes along and addresses those requirements the switching cost is pretty huge. It’s not an aversion to it; it’s that we haven’t seen open source out there that really meets our needs, and there’s a switching cost that we have to take into account, which will slow us down.”

Microsoft is, however, considering contributing the Azure Service Fabric architecture to the open source community, Russinovich said. But there has to be some symbiosis.

“What’s secret sauce, what’s not; what’s the cost of contributing to open source, what’s the benefit to customers of open source, what’s the benefit to us penetrating markets,” he says. “It’s a constant evaluation.”

Some of the challenges in constructing the Azure SDN were retrofitting existing controllers into the Service Fabric, Russinovich says. That resulted in some scaling issues.

“Some of the original controllers were written not using Service Fabric so they were not microservice oriented,” he says. “We immediately started to run into scale challenges with that. Existing ones are being (rewritten) onto Service Fabric.

“Another one is this evolution of the VFP and how it does packet processing. That is not something that we sat down initially and said, ‘it’s connections, not flows.’ We need to make sure that packet processing on every packet after the connection is set up needs to be highly efficient. It’s been the challenge of being able to operate efficiently, scale it up quickly, being able to deliver features into it quickly, and being able to take the load off the server so we can run VMs on it.”

What’s next for the Azure SDN? Preparing for more explosive growth of the Microsoft cloud, Russinovich says.

“It’s a constant evolution in terms of functionality and features,” he says. “You’re going to see us get richer and more powerful abstractions at the network level from a customer API perspective. We’re going to see 10X scale in a few years.”

Preparing for your Windows Server upgrade

It’s time to say goodbye to Windows Server 2003. Getting through the migration requires not just Windows expertise, but knowledge of your app portfolio

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter’s approach.

If you’ve been clinging to Windows Server 2003, trying to ignore the fact that Microsoft will officially end support on July 14, 2015, you’re playing with fire. Once the updates stop, you’ll be exposed to troubling security and compliance risks. Take note that in 2013 alone, Microsoft issued 37 updates for Windows Server 2003/R2.

Yet upgrading servers is a resource challenge as well as a mindset issue. The top barrier for migration, according to a survey, is the belief that existing systems are working just fine, and many users worry about software incompatibility.

The actual migration process to Windows Server 2008 or 2012 (the likely choices) is straightforward and well-documented, and most Windows engineers can easily learn how to work in a new OS. The complexity lies in determining if and how business applications will successfully transition to the new platform, and which ones will need to be replaced or shuttered.

Some IT shops will find they simply don’t have time to undergo this rigorous process. External service providers can help. Even if you have a sizable IT staff, you’ll need to consider whether it’s a worthwhile use of a senior engineer’s time to work on server migrations, compared with other high-priority projects. Regardless of your approach – internally or externally managed – here are some steps for working through a successful move away from Windows Server 2003.

1. It is often surprising what midsize and large companies don’t know about their internal IT systems. It’s critical to identify how many servers you have, where they’re located, and what OS and applications they’re running. That gives insight into how many servers and which applications are at risk. Asset management software can help by updating this information continually, saving crucial time in the analysis. Don’t forget to document what security systems are in place on servers, networks and applications.

2. It’s important to work closely with business unit heads to communicate why and when the migration is happening and any expected changes to their applications. Determine what IT specialists you need (including database and application managers) and if you can free them up for the migration or if you’ll need outside help.

3. Most companies will likely opt for Windows Server 2012, simply because it’s the latest version and will be supported longer. Whether this is feasible depends on your applications. If a critical application or two aren’t compatible with, or don’t have a near-term upgrade path to, your desired OS, you’ll have to decide whether to replace or retire them. Work closely with application vendors to understand if and when they will issue an updated version, keeping in mind that promises don’t always pan out.

An application might also require running on a 32-bit version of the software. While both 2008 and 2012 offer 32-bit versions, this will cut performance. We’ve seen at least one case in which a company had to undergo two upgrades for a particular application – from 2003 to 2008 and finally to 2012 because the application vendor was not ready for 2012. Knowing these factors ahead of time makes all the difference as you plan for migration.

4. A positive outcome of being forced into migration (other than getting a better and faster OS) is that it’s the perfect time to push for a change in strategy. Most IT organizations will need to replace their hardware to install 2008 or 2012, yet there’s also the question of whether your company should continue owning equipment at all. Companies of all sizes and sectors are looking harder at hosted and cloud environments, which reduce daily IT support for standard processes such as server maintenance. For those companies still worried about security and compliance, a co-location arrangement at a nearby data center can reduce some of the risk and cost of maintaining hardware on site. Managed services allow your staff to focus on initiatives that add real value to the business, rather than maintaining systems.

5. For a midsize to large company with dozens of servers and hundreds of applications, sorting out a migration plan can be overwhelming. Here’s a simple way to look at it. First, move any customer-facing apps and public websites, since they present the greatest potential damage to your business if impaired or hacked. Next, begin migrating the applications with compatibility problems that require customization or upgrades, as they’ll take the longest to prepare. In parallel, migrate the easy-to-move applications: the ones already primed to run on an upgraded operating system, or that can be upgraded quickly.

Technically, this is a straightforward process once you tackle all the previous challenges. However, server migration is not just a technical project. You’ll need people to help with coordination and communication with the business, project management and support. You’ll of course want to test the applications on the new servers before retiring the old ones. Backups are absolutely critical.

What if, despite your best efforts, you find yourself in no man’s land, past the deadline, and your environment is still not fully transitioned to the new server platform? To mitigate security and reliability risks, ensure that all applications which are exposed to the Internet are fully encrypted and that all servers are also locked down. You’ll need to invest more time monitoring applications that remain on 2003, watching for potential breaches or suspicious behavior.

If you’ve not already started on a Windows Server 2003 migration plan, don’t wait another minute, but don’t panic either. There’s a world of experienced consultants and providers out there ready to help you complete a successful upgrade and keep your business running smoothly.

Patch Tuesday June 2015: 4 of Microsoft’s 8 patches close remote code execution holes

Microsoft released eight security bulletins, two rated critical, but four address remote code execution vulnerabilities that an attacker could exploit to take control of a victim’s machine.

For June 2015 “Update Tuesday,” Microsoft released eight security bulletins; only two of the security updates are rated critical for resolving remote code execution (RCE) flaws, but two patches rated important also address RCE vulnerabilities.

Rated as Critical
MS15-056 is a cumulative security update for Internet Explorer, which fixes 24 vulnerabilities. Qualys CTO Wolfgang Kandek added, “This includes 20 critical flaws that can lead to RCE which an attacker would trigger through a malicious webpage. All versions of IE and Windows are affected. Patch this first and fast.”

Microsoft said the patch resolves vulnerabilities by “preventing browser histories from being accessed by a malicious site; adding additional permission validations to Internet Explorer; and modifying how Internet Explorer handles objects in memory.”

MS15-057 fixes a hole in Windows that could allow remote code execution if Windows Media Player opens specially crafted media content that is hosted on a malicious site. An attacker could exploit this vulnerability to “take complete control of an affected system remotely.”

Rated as Important
MS15-058 is currently listed only as a placeholder, but MS15-059 and MS15-060 both address remote code execution flaws.

MS15-059 addresses RCE vulnerabilities in Microsoft Office. Although it’s rated important for Microsoft Office 2010 and 2013, Microsoft Office Compatibility Pack Service Pack 3 and Microsoft Office 2013 RT, Kandek said it should be your second patching priority. If an attacker could convince a user to open a malicious file with Word or any other Office tool, then he or she could take control of a user’s machine. “The fact that one can achieve RCE, plus the ease with which an attacker can convince the target to open an attached file through social engineering, make this a high-risk vulnerability.”

MS15-060 resolves a vulnerability in Microsoft Windows “common controls.” The vulnerability “could allow remote code execution if a user clicks a specially crafted link, or a link to specially crafted content, and then invokes F12 Developer Tools in Internet Explorer.” Kandek explained, “MS15-060 is a vulnerability in the common controls of Windows which is accessible through Internet Explorer Developer Menu. An attack needs to trigger this menu to be successful. Turning off developer mode in Internet Explorer (why is it on by default?) is a listed work-around and is a good defense in depth measure that you should take a look at for your machines.”

The last four patches Microsoft issued address elevation of privilege vulnerabilities.

MS15-061 resolves bugs in Microsoft Windows kernel-mode drivers. “The most severe of these vulnerabilities could allow elevation of privilege if an attacker logs on to the system and runs a specially crafted application. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.”

MS15-062 addresses a security hole in Microsoft Active Directory Federation Services. Microsoft said, “The vulnerability could allow elevation of privilege if an attacker submits a specially crafted URL to a target site. Due to the vulnerability, in specific situations specially crafted script is not properly sanitized, which subsequently could lead to an attacker-supplied script being run in the security context of a user who views the malicious content. For cross-site scripting attacks, this vulnerability requires that a user be visiting a compromised site for any malicious action to occur.”

MS15-063 is another patch for Windows kernel that could allow EoP “if an attacker places a malicious .dll file in a local directory on the machine or on a network share. An attacker would then have to wait for a user to run a program that can load a malicious .dll file, resulting in elevation of privilege. However, in all cases an attacker would have no way to force a user to visit such a network share or website.”

MS15-064 resolves vulnerabilities in Microsoft Exchange Server by “modifying how Exchange web applications manage same-origin policy; modifying how Exchange web applications manage user session authentication; and correcting how Exchange web applications sanitize HTML strings.”

It would be wise to patch Adobe Flash while you are at it as four of 13 vulnerabilities patched are rated critical.

Happy patching!

Apple shows iOS 9’s major upgrades, from multitasking to picture-in-picture

Side-by-side apps, video overlays, and much more are coming to iPads when Apple’s mobile OS releases this fall.

Major changes are coming to our iPads, from the way we select text, to the way we interact with our favorite apps and play videos.

Speaking at Apple’s Worldwide Developer Conference on Monday, Senior Vice President Craig Federighi showcased an updated version of iOS 9 that included a few new features designed specifically with tablet users in mind.

Let’s start with QuickType, an enhancement to the iPad’s onscreen keyboard that includes new shortcuts and turns into a trackpad when you place two fingers on it. The trackpad can be used to select text, move objects around, and generally combine the convenience of touch controls and the precision of a mouse.

iPads will also get access to true, onscreen multitasking, which allows two apps to run side-by-side on the screen at the same time. The new feature, which Apple calls Split View, opens two resizable virtual windows on the screen. Users will be able to control each app independently, transferring information from one to the other using simple gestures, and quickly change the program running inside each panel using a brand-new app switcher. Note: While multitasking will work on most recent iPad models, Split View will be available only on the iPad Air 2.

Finally, a new picture-in-picture feature allows users to play a video from one app while using a different app. The video appears in a tiny window that can be moved around, or even pushed temporarily off-screen to let you focus on your work while your favorite movie or game keeps playing. The window also includes a set of simple controls that let you pause the video or dismiss it without leaving the current app.

The new iPad features will arrive with iOS 9 this fall, with a public beta program open to all starting in July.




70-687 Configuring Windows 8.1

Published: 17 September 2012
Languages: English, Chinese (Simplified), French, German, Japanese, Portuguese (Brazil)
Audiences: IT professionals
Technology: Windows 8.1
Credit towards certification: MCP, MCSA, MCSE

Skills measured
This exam measures your ability to accomplish the technical tasks listed below. The percentages indicate the relative weight of each major topic area in the exam. The higher the percentage, the more questions you are likely to see on that content area in the exam.

Please note that the questions may test on, but will not be limited to, the topics described in the bulleted text.

As of January 2014, this exam includes content covering Windows 8.1.

Install and upgrade to Windows 8.1 (10–15%)

Evaluate hardware readiness and compatibility

Choose between an upgrade and a clean installation; determine which SKU to use, including Windows RT; determine requirements for particular features, including Hyper-V, Miracast display, pervasive device encryption, virtual smart cards and Secure Boot

Install Windows 8.1

Install as Windows To Go, migrate from previous versions of Windows to Windows 8.1, upgrade from Windows 7 or Windows 8 to Windows 8.1, install to VHD, install additional Windows features, configure Windows for additional languages

Migrate and configure user data

Migrate user profiles; configure folder location; configure profiles, including profile version, local, roaming and mandatory

Preparation resources

Utility spotlight: Are you compatible with Windows 8?
Install, deploy and migrate to Windows 8
Windows 8 upgrade paths

Configure hardware and applications (10–15%)

Configure devices and device drivers

Install, update, disable and roll back drivers; resolve driver issues; configure driver settings, including signed and unsigned drivers; manage driver packages

Install and configure desktop apps and Windows Store apps

Install and repair applications by using Windows Installer, configure default program settings, modify file associations, manage access to Windows Store

Control access to local hardware and applications

Configure application restrictions, including Software Restriction Policies and AppLocker; manage installation of and access to removable devices; configure Assigned Access

Configure Internet Explorer 11 and Internet Explorer for the desktop

Configure compatibility view; configure Internet Explorer 11 settings, including add-ons, downloads, security and privacy

Configure Hyper-V

Create and configure virtual machines, including integration services; create and manage checkpoints; create and configure virtual switches; create and configure virtual disks; move a virtual machine’s storage

Preparation resources

Device drivers and deployment
Managing client access to the Windows Store

Configure network connectivity (10–15%)

Configure IP settings

Configure name resolution, connect to a network, configure network locations

Configure networking settings

Connect to a wireless network, manage preferred wireless networks, configure network adapters, configure location-aware printing

Configure and maintain network security

Configure Windows Firewall, configure Windows Firewall with Advanced Security, configure connection security rules (IPsec), configure authenticated exceptions, configure network discovery

Configure remote management

Choose the appropriate remote management tools; configure remote management settings; modify settings remotely by using MMCs or Windows PowerShell; configure Remote Assistance, including Easy Connect

Preparation resources

Managing the new wireless network (IEEE 802.11) policies settings
Windows Firewall with advanced security and IPsec
Deploy remote server administration tools

Configure access to resources (10–15%)

Configure shared resources

Configure shared folder permissions, configure HomeGroup settings, configure libraries, configure shared printers, set up and configure OneDrive

Configure file and folder access

Encrypt files and folders by using Encrypting File System (EFS), configure NTFS permissions, configure disk quotas, configure file access auditing

Configure authentication and authorisation

Configure user rights, manage credentials, manage certificates, configure biometrics, configure picture passwords, configure PIN, set up and configure Microsoft account, configure virtual smart cards, configure authentication in workgroups or domains, configure User Account Control (UAC) behaviour

Preparation resources

Microsoft Virtual Academy: Windows 8 security insights: Module 7—SmartScreen filtering
Windows authentication overview

Configure remote access and mobility (15–20%)

Configure remote connections

Configure remote authentication, configure Remote Desktop settings, configure virtual private network (VPN) connections and authentication, enable VPN reconnect, configure broadband tethering

Configure mobility options

Configure offline file policies, configure power policies, configure Windows To Go, configure sync options, configure WiFi direct

Configure security for mobile devices

Configure BitLocker and BitLocker To Go, configure startup key storage

Preparation resources

Windows 8 VPN get connected
Deploy Windows To Go in your organisation
BitLocker Group Policy settings

Monitor and maintain Windows clients (10–15%)

Configure and manage updates

Configure update settings, configure Windows Update policies, manage update history, roll back updates, update Windows Store apps

Manage local storage

Manage disk volumes and file systems, manage storage spaces

Monitor system performance

Configure and analyse event logs, configure event subscriptions, configure Task Manager, monitor system resources, optimise networking performance, configure indexing options

Preparation resources

Windows Update PowerShell module
Windows Performance Monitor
Windows 8: Task Manager retuned

Configure system and data recovery options (10–15%)

Configure system recovery

Configure a recovery drive, configure system restore, perform a driver rollback, perform a refresh or recycle, configure restore points

Configure file recovery

Restore previous versions of files and folders, configure file history, recover files from OneDrive

Preparation resources

Repair and recovery
How to: Set up and use file history on Windows 8
Windows 8 Jump Start Module 6: Recovery and security


A company has 100 client computers that run Windows 8. You need to assign static IPv6
addresses to the client computers. Which Windows PowerShell cmdlet should you run?

A. Set-NetTCPSetting
B. Set-NetIPInterface
C. Set-NetIPv6Protocol
D. Set-NetIPAddress

Answer: D
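For context, here is a hedged PowerShell sketch of static address assignment in practice; the interface alias and addresses below are illustrative placeholders, not part of the question. Set-NetIPAddress modifies an existing IP address entry, while New-NetIPAddress creates one.

```powershell
# Create a static IPv6 address on the adapter "Ethernet"
# (alias and address are placeholders for illustration only)
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress "2001:db8::10" -PrefixLength 64

# Modify an existing address entry, e.g. change its prefix length
Set-NetIPAddress -IPAddress "2001:db8::10" -PrefixLength 48
```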

A company has an Active Directory Domain Services (AD DS) domain. All client computers run
Windows 8. Portable client computers no longer connect to the corporate wireless network. You
need to ensure that when the corporate wireless network is available, the computers always
connect to it automatically. Which two actions would achieve the goal? (Each correct answer
presents a complete solution. Choose two.)

A. Create a Group Policy object (GPO) to configure a wireless network policy. Link the GPO to
the organizational unit that contains the computers.
B. Configure the corporate wireless network as an unmetered network.
C. Configure the corporate wireless network as a preferred network.
D. Manually connect to the corporate wireless network and select the option to connect
automatically to that network.

Answer: CD

A company has client computers that run Windows 8. The corporate network is configured for
IPv4 and IPv6. You need to disable Media Sensing for IPv6 on the client computers without
affecting IPv4 communications. What should you do on each client computer?

A. Run the Disable-NetAdapterBinding Windows PowerShell cmdlet.
B. Run the Disable-NetAdapter Windows PowerShell cmdlet.
C. Run the Set-NetIPv6Protocol Windows PowerShell cmdlet.
D. Run the Set-NetIPv4Protocol Windows PowerShell cmdlet.

Answer: C
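A minimal sketch of that approach, assuming an elevated PowerShell session; because the setting is per-protocol, the IPv4 stack is untouched:

```powershell
# Disable Media Sensing for IPv6 only;
# Set-NetIPv4Protocol would control the IPv4 stack separately
Set-NetIPv6Protocol -DhcpMediaSense Disabled

# Verify the change
Get-NetIPv6Protocol | Select-Object DhcpMediaSense
```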

A company has an Active Directory Domain Services (AD DS) domain. All client computers run
Windows 8. Two computers named COMPUTER1 and COMPUTER2 are connected to one network
switch and joined to the domain. Windows Firewall is turned off on both computers. You are
planning a remote management solution. You have the following requirements:
* Ensure that COMPUTER2 can run remote commands on COMPUTER1.
* Test the solution by successfully running a command from COMPUTER2 that executes on COMPUTER1.
You need to select the commands to run on COMPUTER1 and COMPUTER2 to meet the remote
management requirements. Which commands should you run?
To answer, drag the appropriate command or commands to the correct location or locations in
the answer area. Commands may be used once, more than once, or not at all. You may need to
drag the split bar between panes or scroll to view content.
Select and Place:
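The drag-and-drop answer area is not reproduced here, but the usual shape of such a solution can be sketched as follows (a hedged example, not the official answer key):

```powershell
# On COMPUTER1: enable the WinRM service and listeners so it can receive remote commands
Enable-PSRemoting -Force

# On COMPUTER2: test by running a command that executes on COMPUTER1
Invoke-Command -ComputerName COMPUTER1 -ScriptBlock { $env:COMPUTERNAME }
```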


A company has 100 client computers that run Windows 8. The client computers are members of a
workgroup. A custom application requires a Windows Firewall exception on each client computer.
You need to configure the exception on the client computers without affecting existing firewall
settings. Which Windows PowerShell cmdlet should you run on each client computer?

A. New-NetFirewallRule
B. Set-NetFirewallSetting
C. Set-NetFirewallRule
D. Set-NetFirewallProfile
E. New-NetIPSecMainModeRule

Answer: A
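A hedged sketch of the winning option; the display name and program path are placeholders. Creating a new rule leaves all existing firewall rules untouched, which is why the Set-* cmdlets (which modify existing rules or profiles) do not fit.

```powershell
# Add a new inbound allow rule for the custom application
# ("Custom App" and the path are illustrative placeholders)
New-NetFirewallRule -DisplayName "Custom App" -Direction Inbound -Program "C:\Apps\CustomApp.exe" -Action Allow
```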

Exam 70-459 Transition Your MCITP: Database Administrator 2008 or MCITP: Database Developer 2008 to MCSE: Data Platform

Published: 11 June 2012
Languages: English
Audiences: IT professionals
Technology: Microsoft SQL Server 2014
Credit towards certification: MCP, MCSA, MCSE

Skills measured
This exam measures your ability to accomplish the technical tasks listed below.

Starting 15 May 2014, the questions on this exam include content covering SQL Server 2014.

Please note that the questions may test on, but will not be limited to, the topics described in the bulleted text.

Implement database objects

Create and alter tables
Develop an optimal strategy for using temporary objects (table variables and temporary tables); manage a table without using triggers; data version control and management; create tables without using the built-in tools; understand the difference between @Table and #table; create calculated columns; implement partitioned tables, schemas and functions; implement column collation; implement online transaction processing (OLTP)

Design, implement and troubleshoot security
Grant, deny, revoke; connection issues; execute as; certificates; loginless user; database roles and permissions; implement contained users; implement cross db ownership chaining; implement schema security; implement server roles; review effective permissions; troubleshoot and repair orphaned users

Create and modify constraints (complex statements)
Create constraints on tables; define constraints, modify constraints according to performance implications, implement cascading deletes, configure constraints for bulk inserts

Preparation resources
UNIQUE constraints and CHECK constraints

Implement programming objects

Design and implement stored procedures

Create stored procedures and other programmatic objects; implement different types of stored procedure results; create stored procedures for data access layer; analyse and rewrite procedures and processes; program stored procedures by using T-SQL and CLR; implement parameters, including table-valued parameter, input and output; implement encryption; implement error handling, including TRY…CATCH; configure appropriate connection settings, design appropriate query paging, including OFFSET and FETCH

Design T-SQL table-valued and scalar functions

Modify scripts that use cursors and loops into a SET-based operation; design deterministic and non-deterministic functions

Create and alter views

Set up and configure partitioned tables and partitioned views; create indexed views

Preparation resources
Create a stored procedure

Design database objects

Design tables
Apply data design patterns; develop appropriately normalised and de-normalised SQL tables; design transactions; design views; implement GUID as a clustered index appropriately; design temp tables appropriately, including # versus @; implement set-based logic; design an encryption strategy; design table partitioning; design a BLOB storage strategy, including filestream and filetable; design tables for In-Memory OLTP

Create and alter indexes
Create indexes and data structures; create filtered indexes; create an indexing strategy, including column store, semantic indexes and INCLUDE; design indexes and statistics; assess which indexes on a table are likely to be used given different search arguments (SARG); create indexes that contain included columns; create spatial indexes

Design data integrity
Design table data integrity policy, including checks, primary key, foreign key, uniqueness, XML schema and nullability; select a primary key

Preparation resources

Optimise and troubleshoot queries

Optimise and tune queries
Tune a poorly performing query, including avoiding unnecessary data-type conversions; identify long-running queries; review and optimise code; analyse execution plans to optimise queries; tune queries using execution plans and Microsoft Database Tuning Advisor (DTA); optimise queries using pivots and utilising common table expressions (CTEs); design the database layout to optimise queries; implement query hints; tune query workloads; implement recursive CTE; implement full text and semantic search; analyse execution plans; implement plan guides

Troubleshoot and resolve performance problems
Interpret performance monitor data; integrate performance monitor data with SQL Traces; design an appropriate recovery model; optimise data files; identify and fix transactional replication problems; detect and resolve server failures; identify and troubleshoot data access problems; manage tempdb contention and autogrowth; implement Resource Governor; monitor and resolve In-Memory OLTP issues, including merge and garbage collection

Collect performance and system information
Monitor performance using Dynamic Management Views; collect output from the Database Engine Tuning Advisor; design Extended Events Sessions; review and interpret Extended Event logs; optimise Extended Event session settings; use Activity Monitor to minimise server impact and determine IO bottlenecks; monitor In-Memory OLTP resources

Preparation resources
Database Engine Tuning Advisor
Manage the size of the transaction log file
SQL Server Profiler

Design database structure

Design for business requirements
Translate business needs to data structures; identify which SQL Server components to use to support business requirements; design a normalisation area; de-normalise by using SQL Server features, including materialisation using indexed views, distributed partitioned views, filtered and non-key column indexes and snapshots

Design physical database and object placement
Design a physical database, including file placement, FILESTREAM, FILETABLE, file groups and RAID; configure system database settings

Design SQL Server instances
Identify hardware for new instances; design CPU affinity; design clustered instances, including Microsoft Distributed Transaction Control (MSDTC); define instance memory allocation; design installation strategies, including sysprep, slipstream and SMB file server; define cross db ownership chaining

Preparation resources
Create indexed views
FileTables (SQL Server)
Failover clustering and AlwaysOn Availability Groups (SQL Server)

Design databases and database objects

Design a database model
Design a logical schema; design data access and data layer architecture; design a database schema; design security architecture; design a cross-server instance database model, including linked servers, security, providers, distributed transactions, distributed partitioned views and Service Broker

Design tables
Design tables appropriately, including physical tables, temp tables, temp table variables, common table expressions, column store indexes and user-defined table types; FILESTREAM, FILETABLE and In-Memory OLTP; design views and table values functions; design a compression strategy, including row and page; select an appropriate data type; design computed columns

Design T-SQL stored procedures
Create stored procedures; design a data-access strategy using stored procedures; design appropriate stored procedure parameters, including input, output and Table Valued; design error handling; design an In-Memory OLTP strategy for stored procedures

Preparation resources
Collation and Unicode support
Row compression implementation
Stored procedures (database engine)

Design database security

Design an application strategy to support security

Design security, including security roles, signed stored procedures, encryption, contained logins, EXECUTE AS and credentials; implement schemas and schema security; design security maintenance, including SQL logins, integrated authentication, permissions and mirroring

Design instance-level security configurations
Implement separation of duties using different login roles; choose authentication type, including logon triggers, regulatory requirements and certificates; implement data encryption, including master key and configuration; implement DDL triggers; define a secure service account

Preparation resources
Tutorial: Signing stored procedures with a certificate
Logon triggers
DDL triggers

Design a troubleshooting and optimisation solution

Troubleshoot and resolve concurrency issues

Examine deadlocking issues using the SQL server logs and trace flags; design a reporting database infrastructure, including replicated databases; monitor concurrency, including Dynamic Management Views (DMV); diagnose blocking, including live locking and deadlocking; diagnose waits; use Extended Events; implement query hints to increase concurrency

Design a monitoring solution at the instance level
Design auditing strategies, including Extended Events, Event traces, SQL Audit, Profiler-scheduled or event-based maintenance, Performance Monitor and DMV usage; set up file and table growth monitoring; collect performance indicators and counters; create jobs to monitor server health; audit using Windows

Preparation resources
KILL (Transact-SQL)
SQL Server audit (database engine)



You need to configure the ProcessUpdateProc stored procedure to stop running in the event of a
failure of one of the UPDATE statements.
How should you modify the ProcessUpdateProc stored procedure?

A. By setting the NOCOUNT option to ON.
B. By setting the NOEXEC option to ON.
C. By setting the XACT_ABORT option to ON.
D. By setting the XACT_ABORT option to OFF.

Answer: C
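With XACT_ABORT ON, a run-time error in any statement aborts the batch and rolls back the open transaction, so subsequent UPDATE statements never run. An illustrative T-SQL sketch; the procedure body and the dbo.Orders table are placeholders, since the actual ProcessUpdateProc definition is not given in the question:

```sql
-- Hypothetical sketch: dbo.Orders and both UPDATEs are illustrative
CREATE PROCEDURE dbo.ProcessUpdateProc
AS
BEGIN
    SET XACT_ABORT ON;  -- abort and roll back on any run-time error
    BEGIN TRANSACTION;
        UPDATE dbo.Orders SET Status = 'Processed' WHERE Status = 'Pending';
        UPDATE dbo.Orders SET LastRun = SYSDATETIME() WHERE Status = 'Processed';
    COMMIT TRANSACTION;
END
```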


You need to design a solution that enables the recovery of the DailyReportsTemp database in
less than one hour in the event of a storage hardware failure. Your solution must minimize costs.
What should you recommend?

A. SQL Server Failover Clustering
B. Peer-to-peer replication
C. Differential backups
D. Log shipping
E. Database snapshots

Answer: D


You need to recommend a solution to meet the recovery requirements for the Manufacturing
database. Your solution must minimize costs.
What should you recommend?

A. Database snapshots
B. Transaction log backups.
C. Differential backups
D. SQL Server Failover Clustering
E. Peer-to-peer replication

Answer: A


You need to address the backup issues of the Sales database.
How can you reduce the time it takes to back up the Sales database?

A. By configuring table partitioning.
B. By configuring filegroups.
C. By configuring Resource Governor.
D. By configuring Copy-Only backups.

Answer: B


You need to provide a group of users from the IT and Manufacturing departments the minimum
administrative rights to view database information and server state for the Manufacturing database
on MainDB1.
What should you do?

A. You should configure a Database Role.
B. You should configure a Server Role.
C. You should configure a Shared SQL Server Login.
D. You should configure a Local Security Group.

Answer: B



Security Is a Prisoner of the Network

Cybersecurity professionals must gain experience and get comfortable with virtual network security

I have a very distinct memory about a conversation I had with a colleague in the mid-to-late 1990s about how NetWare worked. I told him that file and print services resided “in the network” but he couldn’t get his arms around this concept. He continually pushed back by saying things like, “well the printers and file servers have to be plugged into the network so isn’t NetWare just running on these devices?”

His assumption was somewhat accurate, since NetWare did control physical file servers and printers. What he didn’t get, however, was that NetWare made physical network devices subservient to global, virtual file and print services. Before NetWare (and similar technologies like Sun’s NFS), you had to have a physical connection to a device and/or control these connections on a device-by-device basis. Novell radically changed this by using software to abstract connections, which made it much easier to point users at local printers and file shares while applying central access controls for security and privacy.

Why am I strolling down memory LAN (author’s note: I am pretty proud of this pun)? Because we face a similar changing situation today with regard to network security and cloud computing. I contend that security has been a prisoner of the network over the past 20 years.

During this timeframe, large organizations deployed an army of network security devices to filter or at least inspect IP packets for security purposes. As organizations added more servers and more network traffic, they were forced to add more network security devices. This required a series of unnatural acts like moving traffic to and fro so it could pass by various security checkpoints. Security and network engineers also created security zones with physical and virtual network segmentation, and employed teams of people to create and manage ACLs, firewalls, WAFs, etc.

Not surprisingly, network security has become incredibly complex, cumbersome, and fragile as a result of layers upon layers of network imprisonment. It now takes a heroic effort from cybersecurity and network operations teams to keep up with these challenges.

Fast forward to 2015 and there is a radical change occurring. IT initiatives like server virtualization, cloud computing, NFV, and SDN are game changers poised to break the tight coupling between cybersecurity and the network.

Now this breakup is still in its early stages and like the song says: Breaking up is hard to do. For example, ESG research reveals that 60% of organizations say they are still learning how to apply network security policies (and policy enforcement) to public/private cloud infrastructure. Furthermore, 60% of organizations say that their network security operations and processes lack the right level of automation and orchestration necessary for public/private cloud computing infrastructure (note: I am an ESG employee).

As painful as this separation is today, CISOs and network engineers must understand that there may be a network security rainbow on the horizon. Just as NetWare turned file and print into a productive and operationally-efficient virtual network service, there are a number of technology trends and innovations that could enable CISOs to virtualize and distribute network security services across the entire network. For example:

* Foundational technologies like SDN, NFV, Cisco ACI and VMware NSX.
* Cloud security monitoring tools from HyTrust, ThreatConnect, and SkyHigh Networks, as well as cloud connectors for ArcSight, QRadar, RSA, and Splunk.
* NetWare-like network security services software from CloudPassage, Illumio, and vArmour.
* Network security orchestration tools from firms like RedSeal and Tufin.
* Virtual editions of leading physical network security products from vendors like Check Point, Fortinet, Juniper, and Palo Alto Networks.

A few years ago, VMware declared that organizations could actually improve their security posture by embracing server virtualization. While this seemed like blasphemy at the time, VMware was absolutely right, and the technologies and trends mentioned above make that claim even more attainable. To get there, however, CIOs, CISOs, and networking professionals have to think differently. Rather than try to emulate physical network security in the cloud, cybersecurity and networking staff must embrace virtual network security services, learn how to use them, and understand how to apply them to improve security efficacy and operational efficiency.

Back in the 1990s, NetWare transformed file and print services and produced an army of skilled IT professionals with CNE certifications. Over the next few years, we will see a similar revolution as security sheds its physical network shackles and assumes its place as a set of virtual network services.
