Archive for the ‘Tech’ Category

20-plus eye-popping Black Friday 2014 tech deals

iPhone 6, iPad Air, Samsung Galaxy gear and big cheap TVs among the hottest electronic deals for Black Friday and Cyber Monday in 2014.

Black Friday is upon us
Word is that more retailers will relent to public pressure – I mean do the right thing for their employees – and close on Thanksgiving Day this year. But that won’t prevent them from going all out online, where much is automated and the workers are less prominent. Here are some of the best deals on network and technology offerings for Black Friday, Cyber Monday and in between. (Compare with last year’s deals)

Dell: Inspiron 15-inch laptop
Powered by an Intel Celeron processor and running Windows 8.1, this system boasts 4GB of RAM and a 500GB hard drive. Dell’s special pricing for online shoppers, beginning at 12 a.m. on Friday, Nov. 28, is $190, a $110 discount off what Dell calls the “market price” (though Dell appears to regularly sell the laptop for $250).

Target: Apple TV
Like other retailers, Target has a number of deals on Apple products. Among them: $11 off an Apple TV device, which you can get for $89 on Black Friday.

Target: iPhones, iPads and gift cards
Apple gives retailers little leeway in terms of discounting its products, so Target and others often resort to selling the Apple products at the regular price, but bundling them with gift cards. Target is offering a $100 Target gift card with an iPad Air 16GB WiFi tablet ($400), iPad mini 3 16GB WiFi tablet ($400) or iPad mini 2 16GB WiFi tablet ($300).

Best Buy: Samsung Gear Fit Fitness watch with heart rate monitor
Best Buy is slashing the price on this gadget, which comes in black, from $150 to $100. Count your steps taken and calories burned in style, with this device, which syncs up with various Android phones. Best Buy’s online sales will run Thursday/Friday, with stores opening at 5 pm on Thanksgiving Day where allowed, and again at 8 am on Friday.

Best Buy: Surface Pro 3
The retailer is cutting $50 to $150 off the price of Microsoft Surface Pro 3 tablets with 128GB of storage or more (they start at $1,000 before the discount). Note that this does not include the keyboard for the flexible 12-inch touchscreen device.

Best Buy: Panasonic 50-inch LED TV doorbuster
This 33-pound Panasonic TV, which serves up a 1080p and 60Hz HDTV picture, usually costs $550. The pre-Black Friday price is down to $500, but will go for just $200 in this in-store-only deal on Thanksgiving/Black Friday.

Microsoft: Tablets and games
The Microsoft Store lists a slew of deals, some for which you need to wait until Thanksgiving or Black Friday, and others that you can snag ahead of time. Among the early bird specials is a Lumia 635 phone for 1 cent with a new service contract. The phone has a 4.5-inch screen, runs Windows 8.1 and has 8GB of storage. Microsoft also has lots of Xbox and game deals available in its store this holiday shopping season.

Staples: Asus x205-TA Laptop computer
This bare-bones Windows 8.1 machine, with a 32GB hard drive and 2GB of RAM, normally goes for $250. It’s already been marked down to $200, and for Black Friday, Staples is cutting that price in half. The laptop, featuring 802.11abgn WiFi, is powered by an Intel Atom processor and has an 11.6-inch screen.

Staples: JLab Pro-7 Tablet
OK, can’t say we know this brand either, but for $40, it could be worth a shot if you just want to play around with a small Android tablet. The device usually sells for $70. It only packs 8GB of storage, but has a MicroSD slot for adding up to 32GB more.

RadioShack: RC Surveyor Drone
Satisfy your drone curiosity and freak out your neighbors with this 2.4GHz quadcopter that’s been marked down from $70 to $35 for Black Friday. This lightweight flyer comes with a built-in 1080×720 camera, can be controlled up to 65 feet away and can even do stunts. RadioShack will be opening on Thanksgiving morning, again late in the afternoon, and then at 6 am on Black Friday.

Costco: HP Envy 15.6-inch TouchSmart Laptop
This computer is powered by an Intel 4th generation Core i7 processor, runs Windows 8, features Beats audio and a 1TB hard drive. Costco, which is tossing in a second-year warranty, is slashing its $800 warehouse price by $150 for Black Friday shoppers who come into the store.

Office Depot/Officemax: Samsung Galaxy Tab 4
The price on this 10.1-inch Android tablet has been axed to $250, which is $100 off the usual price. Yes, this isn’t Samsung’s latest model, but it only came out in April. The device features a 1.2GHz quad core processor, and 16GB of storage, expandable to 64GB.

Meijer: Samsung Galaxy Tablet Lite
This 7-inch, 8GB tablet will run you $99 on Black Friday, which is $40 off the regular price. Plus, you’ll get a $20 coupon for your next shopping trip. The touchscreen tablet boasts a 1.2GHz dual-core processor.

Sears: 55-inch Samsung LED TV
This 1080p Smart HD-TV, usually priced at $1,400, is available for $800 starting on Thanksgiving night (though note that Sears already lists the TV at $1,000, not $1,400). It comes integrated with services such as Netflix and Pandora.

Belk: iLive Bluetooth Soundbar
This 32-inch black bar will enable you to wirelessly boom your tunes for $70 — $30 off the usual price. It works with iOS gadgets and most Android and BlackBerry devices, and can also sync up with your TV, game systems and more. This is an online deal.

Shopko: Kindle Fire HD tablet
This lightweight 7-inch WiFi tablet (with 8GB of storage, 1GB of which is internal memory) will have its price shaved by $20, so you pay $80. The retailer’s Black Friday deals start at 6 pm on Thanksgiving Day, though look for additional doorbusters as early as Wednesday.

Various retailers: Record Store Day specials
Got an MP3 hater in your life who prefers to spin big ol’ discs? Record Store Day, an annual April event designed to accommodate record lovers, expands for a Black Friday event that will feature limited-edition offerings from a variety of singers and bands, including The Afghan Whigs, The Beatles and Chvrches.

Walmart: iPhone 6
The monster retailer, which has said it will match Amazon prices in all its stores to kick off the holiday shopping season, has a pretty fine deal on the iPhone 6, which will cost $179 for a 16GB model with a two-year contract (typically $199). What’s more, you’ll get a $75 Walmart gift card, plus another $200 gift card for a smartphone trade-in. (Some industry watchers have warned that the 16GB size may only lead to frustration for iPhone 6 users…)

Walmart: 65-inch Vizio LED TV
This behemoth set will go for $648 this Black Friday, a savings of $350. Walmart says a 60-incher last holiday season went at $688, so you can see where pricing for big TVs is going…

Walmart: Xbox One Assassin’s Creed Unity Bundle
This package, including the Microsoft game console, the new Assassin’s Creed Unity and Assassin’s Creed IV: Black Flag, will be available for $329 starting on Thanksgiving Day at Walmart. That’s down from the usual price of $400, though that price has already been marked down to $349.

Toys R Us: 5th generation iPod touch
You don’t hear about these much anymore, but it makes sense that Toys R Us would sell this Apple mainstay. The 16GB model is selling on Black Friday for $150 — $50 off the usual price. It comes in many pretty colors, too!

Kohl’s: Innovative Technology portable power bank
Kohl’s isn’t the first retailer we think of for tech products, but we did come across this possible stocking stuffer: a Justin 2200mAh Power Stick Portable Power Bank for $10, which is $15 off the regular price. It’s USB-pluggable and works with most smartphones to keep you from running out of juice when you’re not able to plug in.

Hhgregg: LG 50-inch smart LED TV
The electronics retailer has a ton of TVs on sale, with many prices slashed by $100 or more. One example: The LG 1080p 120Hz LED WebOS Smart HDTV, which will go for $658, down from $800. You get a free 6-month Spotify subscription to boot.


How automation could take your skills — and your job

A new book by Nicholas Carr should give IT managers pause about the rush to automation

Nicholas Carr’s 2003 Harvard Business Review essay “IT Doesn’t Matter,” and the book that followed, argued that IT is shifting to a service-delivery model comparable to electric utilities. It produced debate and defensiveness among IT managers over the possibility that they were sliding into irrelevancy, a debate that has yet to be settled. But what is clear is that Carr has a talent for raising timely questions, and he has done so again in his latest work, The Glass Cage: Automation and Us (W.W. Norton & Co.).

This new book may make IT managers, once again, uncomfortable.

The Glass Cage examines the possibility that businesses are moving too quickly to automate white collar jobs, sophisticated tasks and mental work, and are increasingly reliant on automated decision-making and predictive analytics. It warns of the potential de-skilling of the workforce, including software developers, as larger shares of work processes are turned over to machines.

This book is not a defense of Luddites. It’s a well-anchored examination of the consequences of deploying systems designed to replace us. Carr’s concerns are illustrated, for instance, by the Federal Aviation Administration’s warning to airlines about automation, and by signs that electronic medical records may actually be raising costs and hurting healthcare.

In an interview, Carr talked about some of the major themes in his book. What follows are edited excerpts:
The book discusses how automation is leading to a decay of skills and new kinds of risks. It cites an erosion of skills among aircraft pilots, financial professionals and health professionals who, for instance, examine images with automation. But automation has long replaced certain skills. What is different today about the automation of knowledge or mental work that makes you concerned? I think it comes down to the scope of what can be automated today. There have always been, from the first time human beings developed tools, and certainly through the industrial revolution, trade-offs between skill loss and skill gain through tools. But until the development of software that can do analysis, make judgments and sense the environment, we’ve never had tools, machines, that can take over professional work in the way that we’re seeing today. That doesn’t necessarily mean take it over entirely, but become the means through which professionals do their jobs, do analytical work, make decisions, and so forth. It’s a matter of the scope of automation being so much broader today and growing ever broader with each passing year.

Where do you think we stand right now in terms of developing this capability? There are some recent breakthroughs in computer technology that have greatly expanded the reach of automation. We see it on the one hand with the automation of complex psychomotor skills. A good example is the self-driving car that Google, and now other car makers, are building. We’re certainly not to the point where you can send a fully autonomous vehicle out into real-world traffic without a backup driver. But it’s clear that we’re now at the point where we can begin sending robots out into the world to act autonomously in a way that was just impossible even 10 years ago. We’re also seeing, with new machine-learning algorithms and predictive algorithms, the ability to analyze and assess information, collect it, interpret it automatically and pump out predictions, decisions and judgments. Really, in the last five years or so, we have opened up a new era in automation, and you have to assume the capabilities in those areas are going to continue to grow, and grow pretty rapidly.

What is the worry here? If I can get into my self-driving car in the morning, I can sit back and work on other things. There are two worries. One is practical and the other is philosophical. The actuality of what’s facing us in the foreseeable future is not complete automation; it’s not getting into your car and simply allowing the computer to take over, and it’s not getting into a plane with no pilots. What we’re looking at is a shared responsibility between human experts and computers. So, yes, maybe at some point in the future we will have completely autonomous vehicles able to handle traffic in cities. We’re still a long way away from that. We have to figure out how best to balance the responsibilities between the human expert or professional and the computer. I think we’re going down the wrong path right now. We’re too quick to hand over too much responsibility to the computer, and what that ends up doing is leaving the expert or professional in a kind of passive role: looking at monitors, following templates, entering data. The problem, and we see it with pilots and doctors, is that when the computer fails, when either the technology breaks down or the computer comes up against some situation that it hasn’t been programmed to handle, then the human being has to jump back in and take control, and too often we have allowed the human experts’ skills to get rusty and their situational awareness to fade away, and so they make mistakes. At the practical level, we can be smarter and wiser about how we go about automating and make sure that we keep the human engaged.

Then we have the philosophical side, what are human beings for? What gives meaning to our lives and fulfills us? And it turns out that it is usually doing hard work in the real world, grappling with hard challenges, overcoming them, expanding our talents, engaging with difficult situations. Unfortunately, that is the kind of effort that software programmers, for good reasons of their own, seek to alleviate today. There is a kind of philosophical tension or even existential tension between our desire to offload hard challenges onto computers, and that fact that as human beings, we gain fulfilment and satisfaction and meaning through struggling with hard challenges.

Let’s talk about software developers. In the book, you write that the software profession’s push to “ease the strain of thinking” is taking a toll on their own skills. If the software development tools are becoming more capable, are software developers becoming less capable? I think in many cases they are. Not in all cases. This is the kind of tricky balancing act that we always have to engage in when we automate, and the question is: Is the automation pushing people up to a higher level of skill, or is it turning them into machine operators or computer operators, people who end up de-skilled by the process and have less interesting work? I certainly think we see it in software programming itself. If you can look to integrated development environments and other automated tools to automate tasks that you have already mastered, and that have thus become routine to you, that can free up your time and your mental energy to think about harder problems. On the other hand, if we use automation to simply replace hard work, and therefore prevent you from fully mastering various levels of skills, it can actually have the opposite effect. Instead of lifting you up, it can establish a ceiling above which your mastery can’t go, because you’re simply not practicing the fundamental skills that are required as a kind of baseline to jump to the next level.

What is the risk, if there is a de-skilling of software development and automation takes on too much of the task of writing code? There are very different views on this. Not everyone agrees that we are seeing a de-skilling effect in programming itself. Other people are worried that we are beginning to automate too many programming tasks. I don’t have enough in-depth knowledge to know to what extent de-skilling is really happening, but I think the danger is the same as with de-skilling any expert or professional task: you cut off the unique, distinctive talents that human beings bring to these challenging tasks and that computers simply can’t replicate, such as creative thinking, conceptual thinking, critical thinking and the ability to evaluate the task as you do it, to be self-critical. These are still very human skills, built on common sense, a conscious understanding of the world and intuition gained through experience, things that computers can’t do and probably won’t be able to do for a long time. It’s the loss of those unique human skills, I think, that gets in the way of progress.

What is the antidote to these pitfalls? In some places, there may not be an antidote coming from the business world itself, because there is a conflict in many cases between the desire to maximize efficiency through automation and the desire to make sure that human skills and talents continue to be exercised, practiced and expanded. But I do think we’re seeing at least some signs that a narrow focus on automation to gain immediate efficiency benefits may not always serve a company well in the long term. Earlier this year, Toyota Motor Corp. announced that it had decided to start replacing some of the robots in its Japanese factories with human beings, with craftspeople. Even though it has been out front, a kind of pioneer of automation and robotics in manufacturing, it has suffered some quality problems, with lots of recalls. For Toyota, quality problems aren’t just bad for business; they are bad for its culture, which is built on a sense of pride in the quality that it historically has been able to maintain. Simply focusing on efficiency and automating everything can get in the way of quality in the long term, because you don’t have the distinctive perspective of the human craft worker. Toyota went too far, too quickly, and lost something important.

Gartner recently came out with a prediction that in approximately 10 years about one third of all the jobs that exist today will be replaced by some form of automation. That could be an over-the-top prediction or not. But when you think about the job market going forward, what kind of impact do you see automation having? I think that prediction is probably overaggressive. It’s very easy to come up with these scenarios that show massive job losses. I think what we’re facing is probably a more modest, but still ongoing, destruction or loss of white-collar professional jobs as computers become more capable of undertaking analyses and making judgments. A very good example is in the legal field, where you have seen, very quickly, language-processing software take over the work of evidence discovery. You used to have lots of bright people reading through various documents to find evidence and to figure out relationships among people, and now computers can basically do all that work, so lots of paralegals and junior lawyers lose their jobs because computers can do them. I think we will continue to see that kind of replacement of professional labor with analytical software. The job market is very complex, so it’s easy to become alarmist, but I do think the big challenge is probably less the total number of jobs in the economy than the distribution of those jobs. Because as soon as you are able to automate what used to be very skilled tasks, you also de-skill them and, hence, you don’t have to pay the people who do them as much. We will probably see continued pressure toward the polarization of the workforce and the erosion of good-quality, good-paying middle-class jobs.

What do you want people to take away from this work? I think we’re naturally very enthusiastic about technological advances, and particularly enthusiastic about the ways that engineers and programmers and other inventors can program inanimate machines and computers to do hard things that human beings used to do. That’s amazing, and I think we’re right to be amazed and enthusiastic about that. But I think often our enthusiasm leads us to make assumptions that aren’t in our best interest, assumptions that we should seek convenience and speed and efficiency without regard to the fact that our sense of satisfaction in life often comes from mastering hard challenges, mastering hard skills. My goal is simply to warn people.

I think we have a choice about whether we do this wisely and humanistically, or we take the road that I think we’re on right now, which is to take a misanthropic view of technological progress and just say ‘give computers everything they can possibly do and give human beings whatever is left over.’ I think that’s a recipe for diminishing the quality of life and ultimately short-circuiting progress.


 

Internet of Things roundtable: Experts discuss what to look for in IoT platforms

Networking is at the heart of every Internet of Things deployment, connecting sensors and other “Things” to the apps that interpret the data or take action.

But these are still early days. Assembling an IoT network from commercial off-the-shelf components is still, let’s just say, a work in progress. This will change over time, but for now the technical immaturity is being addressed by System Integrators building custom code to connect disparate parts and by a new class of network meta-product known as the IoT Platform.

IoT Platform products are still in their infancy, but there are already more than 20 on the market today. Approaches vary, so when making a build or buy decision, consider these critical areas of IoT Platform tech: security, sensor compatibility, analytics compatibility, APIs and standards.

To see where we stand on developments in these areas, I emailed experts from seven IoT Platform companies, big and small, asking for input: Roberto De La Mora, Sr. Director at Cisco, Steve Jennis, SVP at PrismTech, Bryan Kester, CEO at SeeControl, Lothar Schubert, Platform Marketing leader, GE Software, Niall Murphy, Founder & CEO at EVRYTHNG, Alan Tait, Technical Manager at Stream Technologies and Raj Vaswani, CTO and Co-Founder, Silver Spring Networks. Here’s what they had to say:

* Security
De La Mora: Security technologies and solutions that are omnipresent in IT networks can be adapted (carefully) to serve Operational Technology in IoT environments. But security is not about adding firewalls or IPS/IDS systems here and there. Cyber Security for IoT should follow a model applied at every layer of the architecture, and be combined with physical security to add intelligence to the operation via data correlation and analytics.

Jennis: Without a standards-based security framework it is very difficult to create communication channels that are both secure and interoperable. An interoperable security solution is very important in order to prevent vendor lock-in and to enable the system to be extended if required.

Kester: Sophisticated customers are encrypting traffic between the sensor board and the cloud. However, most deployments are using private VPNs, which don’t require a lot of precious CPU or RAM from the remote device/system.

Murphy: Crypto-secure digital identities for physical things enable authenticated identities online; token-based security methods, applied through Web standards, manage application access to those digital identities.

Vaswani: Embed security at each layer of the network, including sophisticated authentication and authorization techniques for all intelligent endpoints, require digital signatures and private keys to prevent any unauthorized access or activity on the system, and end-to-end encryption for all communications across the network. Incorporating physical tamper detection and resistance technologies further reduces the risk of unauthorized access and monitoring.
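The panelists’ points about authenticated endpoints and message integrity can be sketched with a symmetric HMAC, a minimal stand-in for the digital-signature schemes Vaswani describes. This is an illustrative sketch only; the key name and payload format are assumptions, and real deployments would use asymmetric keys and hardware-backed storage:

```python
import hmac
import hashlib

# Hypothetical pre-shared key, provisioned to the endpoint at manufacture time.
DEVICE_KEY = b"per-device-secret-key"

def sign_reading(payload: bytes, key: bytes = DEVICE_KEY) -> str:
    """Return a hex HMAC-SHA256 tag for a sensor payload."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_reading(payload: bytes, tag: str, key: bytes = DEVICE_KEY) -> bool:
    """Constant-time check that the payload was produced by a holder of the key."""
    return hmac.compare_digest(sign_reading(payload, key), tag)

reading = b'{"sensor": "temp-01", "value": 21.5}'
tag = sign_reading(reading)
assert verify_reading(reading, tag)          # untampered payload verifies
assert not verify_reading(b'{"sensor": "temp-01", "value": 99.9}', tag)  # tampering is detected
```

Combined with transport encryption (the VPNs Kester mentions), a tag like this lets the back end reject readings from unauthorized or tampered endpoints.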

* Sensor Compatibility
Jennis: The following Platform considerations should be taken into account:

· Memory footprint – how much memory does the Platform require to function? Some simple sensors have only 128KB of memory to work with.

· Operating system support – does the Platform require a full POSIX-like OS or can it accept something simpler?

· Network stack support, e.g. IPv4, IPv6, 6LoWPAN, other – simple sensors used in low-power wireless personal area networks (LoWPANs) may require a cut-down IP stack.

· Programming language support – a Platform may provide APIs for only specific programming languages (e.g. C or C++).

· Java dependence – does the Platform require a JVM to function, limiting sensor choices?

Murphy: The most important consideration is recognizing the risks inherent in vertically integrated solution architectures. By definition, the Internet of Things is heterogeneous in the types of things it connects. A horizontal architecture, managing the information from and about the things being connected, can abstract the transport layer from the application layer. This allows applications to be developed independently of specific sensor devices, and lets sensor devices and network connectivity methods be changed without breaking application dependencies.

Schubert: A Software-Defined Machine (SDM) decouples software from the underlying hardware, making machines directly programmable through machine apps and allows connecting with virtually “any” machine and edge device, including retrofitting machines and connections to legacy systems.
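The decoupling Murphy and Schubert describe amounts to programming against a transport interface rather than a specific network or device. A minimal sketch, with invented class and topic names (the in-memory transport is a stand-in so the example is self-contained):

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Abstract transport layer; concrete subclasses might wrap MQTT, CoAP, HTTP, etc."""
    @abstractmethod
    def send(self, topic: str, payload: dict) -> None: ...

class InMemoryTransport(Transport):
    """Stand-in transport so the sketch runs without a network."""
    def __init__(self):
        self.delivered = []
    def send(self, topic: str, payload: dict) -> None:
        self.delivered.append((topic, payload))

class TemperatureApp:
    """Application logic written against Transport, not a specific sensor or network."""
    def __init__(self, transport: Transport):
        self.transport = transport
    def report(self, sensor_id: str, celsius: float) -> None:
        self.transport.send(f"sensors/{sensor_id}", {"celsius": celsius})

# Swapping the transport (or the sensor behind it) requires no application changes.
bus = InMemoryTransport()
app = TemperatureApp(bus)
app.report("temp-01", 21.5)
assert bus.delivered == [("sensors/temp-01", {"celsius": 21.5})]
```

Replacing `InMemoryTransport` with, say, an MQTT-backed subclass changes nothing in `TemperatureApp`, which is the point of the horizontal architecture.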

* Analytics Compatibility

De La Mora: Support for structured and non-structured data, ease of integration with existing operation, automation and control systems, and the ability to operate in a distributed computing environment are all important factors for analytic compatibility.

Kester: To do advanced long-term business intelligence, machine learning or Hadoop-style parallel processing, your Platform choice should have a well-documented, Web-accessible API to interface with your analytics product of choice. It should also be easy for any IT employee, or even a savvy business analyst, to use without training.

Murphy: The network platform has to enable multiple disparate audiences within a company access to benefit from data collection and perform meaningful analysis. Analytics is often thought of in a reporting sense only, but increasingly analytics is being applied in conjunction with machine learning algorithms and rules logic to drive applications and actuate devices.

Tait: You need to be sure the information you are collecting is stored well (backed up, secure, etc.), that you have the ability to export your data and that you maintain ownership of it.

Schubert: The tremendous data growth in industrial IoT demands massively scalable, low-cost infrastructure, such as that based on Apache Hadoop v2 and COTS (commercial off-the-shelf) hardware. It has to support the various security, compliance and data privacy mandates. Predictive analytics is how value is delivered to customers: it provides timely foresight into assets and operations, and actionable recommendations (when paired with rule engines). Perhaps most important, analytics needs to be integrated into operational processes, rather than being a stand-alone IT solution.

* APIs
De La Mora: RESTful APIs are becoming standard. The abstraction capabilities they provide, along with the architectural model based on the Web, are key. SDKs that provide APIs that are not compatible with the W3C TAG group are a nonstarter for applications that should, in the end, be connected to the Internet.

Jennis: First and foremost, APIs should be clean, type-safe and idiomatic. In addition, APIs should favor non-blocking/asynchronous interaction models to make it easier to build responsive systems. Where possible APIs should be standardized to ease component integration and prevent lock-in.

Murphy: APIs should use Web standards and blueprints (e.g. REST and no WSDL/SOAP), and state-of-art Web security systems. They should also offer ways of extracting the data, not just feeding it in.

Tait: Keep it simple: truly good APIs are clear, concise and have a purpose. They should also make the common things easy.

Schubert: Service-oriented architectures (SOA) and related application development paradigms rely on APIs for integration of services, processes and systems. APIs must be open, accessible and upgrade-compatible.
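Jennis’s preference for non-blocking, asynchronous APIs can be illustrated with a small client sketch. Everything here is hypothetical (the function names and the fixed return value stand in for a real network call); the point is that each call yields to the event loop instead of tying up a thread, so many device reads can be in flight at once:

```python
import asyncio

async def fetch_reading(sensor_id: str) -> dict:
    """Hypothetical non-blocking read; a real client would await a network call here."""
    await asyncio.sleep(0)  # yield to the event loop instead of blocking a thread
    return {"sensor": sensor_id, "value": 21.5}

async def fetch_all(sensor_ids: list[str]) -> list[dict]:
    """Issue all reads concurrently rather than one blocking call at a time."""
    return await asyncio.gather(*(fetch_reading(s) for s in sensor_ids))

readings = asyncio.run(fetch_all(["temp-01", "temp-02"]))
assert [r["sensor"] for r in readings] == ["temp-01", "temp-02"]
```

A blocking equivalent would serialize the reads; with hundreds of sensors, the asynchronous form is what keeps the calling system responsive.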

* Standards
De La Mora: We are calling this the Internet of Things because it will be part of the next generation of the Internet, so the only key standard protocol, that I see in the future, is IPv6.

Kester: Any Platform that is in communication with devices should support the major communication protocols in use today, which are UDP, MQTT, XMPP, CoAP, Modbus/TCP and HTTP.

Murphy: RESTful application programming interfaces, JSON and similar Web-centric formats for data exchange should be used. The Platform that an enterprise uses to manage its physical products and assets as digital assets needs to integrate smoothly with both the enterprise’s other systems and third-party applications. Integration means the technical protocols of system-to-system interaction (e.g. REST, OAuth) but also, critically, the semantics of the information itself.

Vaswani: The use of universal standards such as IP ensures that products can be easily mixed and matched from different vendors to ensure full interoperability and to deliver on other applications supported by an even broader ecosystem of hardware and software players.
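The Web-centric data exchange Murphy recommends can be sketched with a round trip through JSON. The field names and the REST-style `self` link below are assumptions for illustration, not any standard schema:

```python
import json

# Illustrative JSON document for a "thing"; field names are invented for this sketch.
payload = {
    "thing_id": "meter-42",
    "kind": "electricity-meter",
    "readings": [{"ts": "2014-11-28T06:00:00Z", "kwh": 1.25}],
    "links": {"self": "/things/meter-42"},  # REST-style link back to the resource
}

wire = json.dumps(payload)    # what would travel over HTTP
parsed = json.loads(wire)     # what the receiving system reconstructs
assert parsed["thing_id"] == "meter-42"
assert parsed["readings"][0]["kwh"] == 1.25
```

Because the format is plain JSON over standard protocols, any third-party system that speaks HTTP can consume it, which is the interoperability argument both Murphy and Vaswani are making.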



Workers use their own devices at work, without boss’s knowledge

Line between work and play is getting more blurred, Gartner survey indicates

Many workers use their personally owned smartphones and other computers for job tasks, but a new survey shows a big percentage are doing so without their employer’s knowledge.

Market research firm Gartner surveyed 4,300 U.S. consumers in June who work at large companies (with more than 1,000 employees) and found 40% used personally owned smartphones, tablets, laptops or desktops as a primary or supplemental business device.

That 40% might not be unusual, but more surprisingly, Gartner found that 45% of workers not required to use a personal device for work were doing so without their employer’s knowledge.

“Almost half [are using their device] without their employer’s awareness,” said Gartner analyst Amanda Sabia in an interview.

“Are those without employer’s awareness violating a rule? That would depend on the employer,” Sabia added. “The point is that some CIOs are underestimating [the number of] employees using their devices and should be prepared for this.”

The Gartner survey found the most popular personally owned device used for work was a desktop computer, at 42%, closely followed by a smartphone, at 40%, a laptop, at 36%, and a tablet, at 26%.

“The lines between work and play are becoming more and more blurred as employees choose to use their own device for work purposes whether sanctioned by an employer or not,” Sabia said. “Devices once bought for personal use are increasingly used for work.”

Technology manufacturers and wireless service providers could do more to respond to the bring-your-own-device trend, Sabia said. The survey found that the primary use of a smartphone, after making calls and texting, was to get maps and directions.

“Smartphone vendors should focus on ensuring ease of integration of a smartphone with in-car sound and media systems for hands-free and real-time operation of these [mapping and directions] functions,” Sabia added.

The survey asked a wide range of questions beyond BYOD concerns. Another finding was that 32% of respondents plan to buy a smartphone in the next 12 months, while 23% want to buy a laptop or notebook, 20% plan to buy a tablet and 14% a desktop PC.

Also, about 80% of respondents said they had downloaded a mobile app. Of the apps downloaded, three-fourths were free and one-fourth were paid.

Nick Ingelbrecht, a Gartner analyst, noted that the app industry has struggled to make money on its products, but said the survey results should provide encouragement. The app market is maturing, and consumers, while more discerning, will pay for apps they find valuable, he said.


 


9 employee insiders who breached security

 

These disgruntled employees show what can happen when an employer wrongs them.

Security admins used to worry mainly about keeping the bad guys out of the network, but many documented cases show that sometimes the devil you know is sitting right next to you. A review of recent FBI cyber investigations revealed that victim businesses incurred costs ranging from $5,000 to $3 million in incidents involving disgruntled or former employees, according to AlgoSec. Here are just a few examples from over the years of insiders trying to take down their employer’s network.

Terry Childs, the former network administrator for the City of San Francisco, held the city’s systems hostage for a time. He refused to surrender passwords because he felt his supervisors were incompetent. Childs was convicted of violating California’s computer crime laws in April 2010.

In June 2012, Ricky Joe Mitchell of Charleston, W.Va., a former network engineer for oil and gas company EnerVest, was sentenced to prison for sabotaging the company’s systems. He found out he was going to be fired and decided to reset the company’s servers to their original factory settings.

It was discovered in 2007 that database administrator William Sullivan had stolen 3.2 million customer records including credit card, banking and personal information from Fidelity National Information Services. Sullivan agreed to plead guilty to federal fraud charges and was sentenced to four years and nine months in prison and ordered to pay a $3.2 million fine.

Flowers Hospital had an insider data breach that occurred from June 2013 to February 2014 when one of its employees stole forms containing patient information and possibly used the stolen information to file fraudulent income tax returns.

According to Techworld.com, 34-year-old Sam Chihlung Yin created a fake VPN token in the name of a non-existent employee which he tricked Gucci IT staff into activating after he was fired in May 2010.

Army Private First Class Bradley Manning leaked sensitive military and diplomatic documents to WikiLeaks in 2010. Manning, now known as Chelsea Manning, was sentenced to 35 years in prison.

Back in 2002, Timothy Lloyd was sentenced to three-and-a-half years in prison for planting a software time bomb after he became disgruntled with his employer Omega. The result of the software sabotage was the loss of millions of dollars to the company and the loss of 80 jobs.

Earlier this year, NRAD Medical Associates discovered that an employee radiologist had accessed and acquired protected health information from NRAD’s billing systems without authorization. The breach was estimated to be 97,000 records of patient names and addresses, dates of birth, Social Security information, health insurance, and diagnosis information.

And of course there is the most famous whistleblower of all time: Edward Snowden. Before fleeing the country, he released sensitive NSA documents that ignited a global controversy over government surveillance.


 


8 cutting-edge technologies aimed at eliminating passwords

In the beginning was the password, and we lived with it as best we could. Now, the rise of cyber crime and the proliferation of systems and services requiring authentication have us coming up with yet another not-so-easy-to-remember phrase on a near daily basis. And is any of it making those systems and services truly secure?

One day, passwords will be a thing of the past, and a slew of technologies are being posited as possibilities for a post-password world. Some are upon us, some are on the threshold of usefulness, and some are likely little more than a wild idea, but within each of them is some hint of how we’ve barely scratched the surface of what’s possible with security and identity technology.

The smartphone

The idea: Use your smartphone to log into websites and supply credentials via NFC or SMS.

Examples: Google’s NFC-based tap-to-unlock concept employs this. Instead of typing passwords, PCs authenticate against the user’s phone via NFC.

The good: It should be as easy as it sounds. No interaction from the user is needed, except any PIN they might use to secure the phone itself.

The bad: Getting websites to play along is the hard part, since password-based logins have to be scrapped entirely for the system to be as secure as it can be. Existing credentialing systems (e.g., Facebook or Google login) could be used as a bridge: Log in with one of those services on your phone, then use the service itself to log into the site.

The smartphone, continued
The idea: Use your smartphone, in conjunction with third-party software, to log into websites or even your PC.

Examples: Ping Identity. When a user wants to log in somewhere, a one-time token is sent to their smartphone; all they need to do is tap or swipe the token to authenticate.

The good: Insanely simple in practice, and it can be combined with other smartphone-centric methods (a PIN, for instance) for added security.

The bad: Having enterprises adopt such schemes may be tough if they’re offered only as third-party products. Apple could offer such a service on iPhones if it cared enough about enterprise use; Microsoft might if its smartphone offerings had any traction. Any other takers?
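The one-time-token flow described above can be sketched with a short-lived HMAC: the server mints a token tied to a pending login, pushes it to the phone, and accepts the login only if the token the phone echoes back verifies and hasn’t expired. This is a minimal illustration under stated assumptions, not Ping Identity’s actual protocol; every name in it is hypothetical.

```python
# Minimal sketch of a push-style one-time login token (illustrative only;
# not Ping Identity's real protocol). The server signs a token for a
# pending login; the phone app returns it with one tap; the server verifies.
import hmac, hashlib, os, time

SERVER_KEY = os.urandom(32)  # hypothetical server-side secret
TOKEN_TTL = 60               # seconds a token stays valid

def mint_token(username: str, now: float) -> str:
    msg = f"{username}:{int(now)}".encode()
    sig = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return f"{username}:{int(now)}:{sig}"

def verify_token(token: str, now: float) -> bool:
    username, issued, sig = token.rsplit(":", 2)
    msg = f"{username}:{issued}".encode()
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    fresh = 0 <= now - int(issued) <= TOKEN_TTL
    # Constant-time comparison avoids leaking the signature byte by byte.
    return fresh and hmac.compare_digest(sig, expected)

token = mint_token("alice", time.time())
print(verify_token(token, time.time()))         # fresh token verifies
print(verify_token(token, time.time() + 3600))  # an hour later it has expired
```

A real deployment would also bind the token to the specific login attempt and transport it over a push channel, but the mint/verify/expire shape is the core of the scheme.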

Biometrics
The idea: Use a fingerprint or an iris scan — or even a scan of the vein patterns in your hand — to authenticate.

Examples: They’re all but legion. Fingerprint readers are ubiquitous on business-class notebooks, and while iris scanners are less common, they’re enjoying broader deployment than they used to.

The good: Fingerprint recognition technology is widely available, cheap, well-understood, and easy for nontechnical users.

The bad: Despite all its advantages, fingerprint reading hasn’t done much to displace the use of passwords in places apart from where it’s mandated. Iris scanners aren’t foolproof, either. And privacy worries abound, something not likely to be abated once fingerprint readers become ubiquitous on phones.

The biometric smartphone
The idea: Use your smartphone, in conjunction with built-in biometric sensors, to perform authentication.

Examples: The Samsung Galaxy S5 and HTC One Max (pictured) both sport fingerprint sensors, as do models of the iPhone from the 5S onwards.

The good: Multiple boons in one: smartphones and fingerprint readers are both ubiquitous and easy to leverage, and they require no end user training to be useful, save for registering one’s fingerprint.

The bad: It’s not as hard as it might seem to hack a fingerprint scanner (although it isn’t trivial). Worst of all, once a fingerprint is stolen, it’s, um, pretty hard to change it.

The digital tattoo
The idea: A flexible electronic device worn directly on the skin, like a fake tattoo, and used to perform authentication via NFC.

Examples: Motorola has released such a thing for the Moto X (pictured), at a cost of $10 for a pack of 10 tattoo stickers, with each sticker lasting around five days.

The good: In theory, it sounds great. Nothing to type, nothing to touch, (almost) nothing to carry around. The person is the password.

The bad: So far it’s a relatively costly technology ($1 a week), and it’s a toss-up as to whether people will trade typing passwords for slapping a wafer of plastic somewhere on their bodies. I don’t know about you, but even a Band-Aid starts bothering me after a few hours.



 

 

Gartner: IT careers – what’s hot?

Do you know smart machines, robotics and risk analysis? Gartner says you should

ORLANDO — If you are to believe the experts here at the Gartner IT Symposium, IT workers and managers will need to undergo widespread change if they are to compete effectively for jobs in the next few years.


How much change? Well, Gartner says that by 2018, digital business will require 50% fewer business-process workers and 500% more key digital business jobs, compared with traditional models. IT leaders will need to develop new hiring practices to recruit for these nontraditional IT roles.

“Our recommendation is that IT leaders have to develop new practices to recruit for non-traditional IT roles…otherwise we are going to keep designing things that will offend people,” said Daryl Plummer, managing vice president, chief of Research and chief Gartner Fellow. “We need more skills on how to relate to humans – the people who think people first are rare.”

Gartner intimated within large companies there are smaller ones, like startups that need new skills.

“The new digital startups in your business units are thirsting for data analysts, software developers and cloud vendor management staff, and they are often hiring them faster than IT,” said Peter Sondergaard, senior vice president and global head of Research. “They may be experimenting with smart machines, seeking technology expertise IT often doesn’t have.”

So what are the hottest skills? Gartner says right now, the hottest skills CIOs must hire or outsource for are:
Mobile
User Experience
Data sciences

Three years from now, the hottest skills will be:
Smart Machines (including the Internet of Things)
Robotics
Automated Judgment
Ethics

Over the next seven years, there will be a surge in new specialized jobs. The top jobs for digital will be:
Integration Specialists
Digital Business Architects
Regulatory Analysts
Risk Professionals



 

‘Bigger than Heartbleed’ Shellshock flaw leaves OS X, Linux, more open to attack

Well, this isn’t good. Security researcher Stephane Chazelas has discovered a devastating flaw in the Unix Bash shell, leaving Linux machines, OS X machines, routers, older IoT devices, and more vulnerable to attack. “Shellshock,” as it’s been dubbed, allows attackers to run code on your machine after exploiting the flaw, but the true danger here lies in just how old Shellshock is: this vulnerability has apparently been lurking in the Bash shell for years.

Why this matters: A large swath of web-connected devices, web servers, and web-powered services run on Linux distributions equipped with the Bash shell, and Mac OS X Mavericks is also affected. The fact that Shellshock’s roots run so deep likely means the vulnerability will linger in unpatched systems for the foreseeable future, though the odds of it directly affecting you appear somewhat slim if you take standard security precautions.

Heartbleed redux

The news comes as the security community is just shaking off the effects of Heartbleed, a critical vulnerability in the widely used OpenSSL security protocol. “Today’s bash bug is as big a deal as Heartbleed,” says Errata Security’s Robert Graham, a respected researcher.

Hold your horses, Robert. Before we dive into dire warnings, let’s focus on the positive side of this story. Numerous Linux variants have already pushed out patches that plug Shellshock, including Red Hat, Fedora, CentOS, Ubuntu, and Debian, and big Internet services like Akamai are already on the case.

But Graham says Shellshock’s danger will nevertheless linger for years, partly because “an enormous percentage of software interacts with the shell in some fashion”—essentially making it impossible to know exactly how much software is vulnerable—and partly because of the vulnerability’s age.

“Unlike Heartbleed, which only affected a specific version of OpenSSL, this bash bug has been around for a long, long time. That means there are lots of old devices on the network vulnerable to this bug. The number of systems needing to be patched, but which won’t be, is much larger than Heartbleed.”

Now consider that more than two months after Heartbleed was disclosed, hundreds of thousands of systems remained vulnerable to the exploit.
Maybe not Heartbleed redux?

But don’t panic! (Or at least not yet.) While Heartbleed had the potential to be widely exploited, Jen Ellis of security firm Rapid7 says the Shellshock bug’s outlook isn’t quite as grim, even if it is rampant.

“The vulnerability looks pretty awful at first glance, but most systems with Bash installed will NOT be remotely exploitable as a result of this issue,” Ellis writes. “In order to exploit this flaw, an attacker would need the ability to send a malicious environment variable to a program interacting with the network and this program would have to be implemented in Bash, or spawn a sub-command using Bash.”
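Ellis’s preconditions are easiest to see in the classic CGI vector: a web gateway copies attacker-controlled HTTP headers into environment variables before spawning a handler, so a crafted header reaches Bash’s function-import parser only if that handler is, or spawns, Bash. A rough sketch of the header-to-environment step follows; the function names are hypothetical, and this illustrates the CGI convention rather than any specific server.

```python
# Sketch of the CGI convention that Shellshock abuses: HTTP request headers
# are copied verbatim into HTTP_* environment variables for the child
# process. Function names here are hypothetical.
import subprocess

def cgi_environ(headers: dict) -> dict:
    # Per the CGI spec (RFC 3875), "User-Agent: x" becomes HTTP_USER_AGENT=x.
    return {"HTTP_" + k.upper().replace("-", "_"): v
            for k, v in headers.items()}

def run_handler(headers: dict) -> str:
    env = cgi_environ(headers)
    # If this child were Bash, a header value of "() { :;}; <command>"
    # would be parsed as an exported function definition, and vulnerable
    # versions would execute the trailing command before doing anything else.
    out = subprocess.run(["/bin/sh", "-c", "echo handled"],
                         env=env, capture_output=True, text=True)
    return out.stdout.strip()

env = cgi_environ({"User-Agent": "() { :;}; echo pwned"})
print(env["HTTP_USER_AGENT"])  # the payload arrives intact in the environment
print(run_handler({"User-Agent": "() { :;}; echo pwned"}))
```

The takeaway matches Ellis’s point: the environment variable is delivered no matter what, but code execution depends entirely on which shell ends up parsing it.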

As a result, Ellis and Rapid7 urge keeping a level head about the bug.
“We’re not keen to jump on the ‘Heartbleed 2.0’ bandwagon. The conclusion we reached is that some factors are worse, but the overall picture is less dire… there are a number of factors that need to be in play for a target to be susceptible to attack. Every affected application may be exploitable through a slightly different vector or have different requirements to reach the vulnerable code. This may significantly limit how widespread attacks will be in the wild. Heartbleed was much easier to conclusively test and the impact way more widespread.”

While older Internet-connected devices (like, say, security cameras) seem to be likely victims of Shellshock, respected security researchers Michal Zalewski and Paul McMillan note that many embedded devices don’t actually use the Bash shell at all.

Beyond Linux-based systems, Graham and Ars Technica report that Mac OS X Mavericks contains a vulnerable version of Bash.

To test if your version of Bash is vulnerable to this issue, Red Hat says to run this command:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the system responds with the following, then you’re running a vulnerable version of Bash and you should apply any available updates immediately:

vulnerable
this is a test

“The patch used to fix this issue ensures that no code is allowed after the end of a Bash function,” Red Hat reports. So rather than spitting out “vulnerable,” a patched version of Bash will produce the following when you run the command above:

$ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test

What does this mean?

When you get down to brass tacks, most major websites and modern gadgets you own likely won’t be affected by this Bash vulnerability, and Apple will no doubt patch the OS X implementation quickly. (Here’s a highly technical DIY fix for now.)

It’s impossible to know just how far this flaw reaches, and it’s likely to linger on in neglected websites, older routers, and some legacy Internet of Things devices—many of which are impossible to patch—providing an opening for determined hackers to sneak into those systems.

So what should you do? Here’s some actionable advice from security researcher Troy Hunt’s tremendous in-depth primer on Shellshock:

“In short, the advice to consumers is this: watch for security updates, particularly on OS X. Also keep an eye on any advice you may get from your ISP or other providers of devices you have that run embedded software. Do be cautious of emails requesting information or instructing you to run software – events like this are often followed by phishing attacks that capitalize on consumers’ fears.”

PCWorld’s guide to protecting your PC against devious security traps can help you I.D. bad actors, while Ian Paul has three tips for spotting malicious emails over at his Hassle-Free PC column.


 


Sneak Peek: New features coming to Internet Explorer

Microsoft’s new Developer Channel offers glimpse into upcoming features of IE.
Microsoft recently released a “Developer Channel” edition of Internet Explorer, launching a new way to preview upcoming features and laying the groundwork for a business strategy focused on web services. Here’s what you need to know about the future of Internet Explorer.

Developer Channel version offers sneak peek at new features
Though it’s available for the public to freely download and install, Internet Explorer Developer Channel is not meant for everyday use, whether business or casual. As its name implies, IE DC is primarily a playground for developers, but anyone can try out the browser to see what new features the IE development team is working on.

No more betas
Instead of releasing betas, the IE development team will update IE DC with the latest features, fixes and optimizations. Throughout this process, you’ll be able to keep up with the work-in-progress of IE by downloading the most current release of IE DC. When the IE team determines this code is ready for public consumption, it will then be rolled out as the next version of IE.

Compatibility is limited to Win 7/8.1
IE DC is available for Windows 7 and Windows 8.1 only, and either OS must also have Internet Explorer 11 installed. Before installing IE DC, you should also make sure your Windows 7 or Windows 8.1 system has the latest official OS updates recommended by Windows Update.

Caveats
IE DC runs within a virtualization system, which keeps the browser in a “sandbox” operating separately from the rest of your Windows environment. This is for reasons of security. The consequences are that IE DC cannot share add-ons or settings that you already have in place with your installation of IE 11; IE DC may run slower than IE 11; and it cannot be used as the default browser.

Tracking features in development
The IE development team has set up a web page where you can follow the latest features being worked on for possible inclusion in future versions of IE. It also lists features already present in the most recent final releases of the browser, as well as ones under consideration but not yet in official development. You can easily filter the list to show only features in development or under consideration, see which version of IE a feature first appeared in, or check its interoperability with the other major web browsers.

New features in IE DC
As of this writing, the release of IE DC includes only a few new technologies being actively worked on. Two are interesting for the average user: GamePad and WebGL Instancing. They clearly signal that the IE development team is expanding the browser’s gaming capabilities. (WebGL Instancing uses a system’s GPU, or graphics processing unit, to draw copies of an object more efficiently without tying up the CPU.) These technologies could also prove integral to less leisurely pursuits, like using a controller to interact with a productivity web app.

Features in development
Other technologies listed as “In Development” (meaning they are not yet implemented in the actual IE DC browser) include Media Capture and Streams, and Web Audio. The first would let a web app in IE access audio or video from your computer’s or device’s mic or webcam. Web Audio would let a web app generate audio through JavaScript.

Features that are being considered
Listed as “Under Consideration” are features that point to granting web apps even more access to control or receive feedback from the hardware of a computer or device (Ambient Light Events, Battery Status, Vibration). Web apps could also be allowed to encode audio or video from within the browser (MediaRecorder), incorporate speech recognition and synthesis (Web Speech), and manipulate the local files on a Windows system (Drag and Drop Directories, FileWriter).

End of numbered versions?
This new system of providing early looks at IE under a continuous development cycle suggests Microsoft may de-emphasize version numbering. If this happens, then, as far as the general public is concerned, the upcoming 12th release of IE could be referred to by Microsoft as simply “Internet Explorer.” As for new features, IE appears to be becoming a more technologically capable browser for use with sophisticated web apps. The IE development team isn’t just looking to make a better browser; it’s aiming to make Internet Explorer a better web app platform.

 


 

Let’s scuttle cybersecurity bachelor’s degree programs

It may sound counterintuitive, but the way to increase the number of cybersecurity professionals is not to start granting degrees in cybersecurity

I suppose it sounds logical.

We’re hearing that the best way to deal with the shortage of cybersecurity professionals is to funnel students into cybersecurity degree programs.

And while we’re at it, let’s address the problem of all those hackers who are thinking outside of the box by recruiting them for these degree programs.

Unfortunately, the logic of these statements is about a micron thick.

Let’s look at those cybersecurity degree programs first. In no other computing discipline do you have a specialized degree program. You do not earn a bachelor’s degree specifically in software engineering, computer graphics, artificial intelligence, database management, systems administration, Web applications programming or project management. Why should there be a bachelor’s degree specific to cybersecurity? (And please note that I am talking about undergraduate cybersecurity programs, not graduate-level programs.)

There shouldn’t be. Security professionals need to function in a variety of disciplines. They can be called upon to evaluate software for security vulnerabilities, to determine whether a user interface is suffering from information leakage, to design secure databases, to secure operating systems, to assess and shore up the security of websites, to incorporate security requirements into new developments and so on. The person you ask to do all of those things needs to be well rounded. But a cybersecurity degree program offers many security classes at the expense of classes that would normally be required to get a general degree in computer science or information systems.

With exceptions like architecture and nursing, bachelor’s degree programs are not intended to be trade schools. The best college degrees strive to help people have a broad understanding of not just their field, but culture in general. Personally, the skills that have helped me most in the cybersecurity field did not come from computer courses, but from the mandatory writing and business classes I took, which taught me to be a better communicator and how to determine what was valuable to decision-makers.

To paraphrase Jim Rohn, the value of going to college is not in the degree you are awarded, but in what you had to become to earn that degree.

My feelings about cybersecurity degree programs aren’t bias of the “that’s not how it was done in my day” variety. I sincerely believe that cybersecurity degree programs are producing graduates who are inadequately prepared for the positions they believe they are training for, and quite possibly compromised in their ability to get any job at all.

Consider the National Security Agency, a promoter of the cybersecurity degree movement and a highly coveted employer in the field. The NSA designates some cybersecurity degree programs as Centers of Excellence in Information Assurance Education. So, the graduates of those programs should have no problem getting hired by the NSA in a cybersecurity capacity, right? Well, maybe not. Take a look at the NSA’s cybersecurity professional development program. It wants people with strong programming skills. But many cybersecurity undergraduate programs do not offer any programming coursework. It’s been cut out to make room for more classes in things like writing security policies.

Now, a general degree in computer science can pretty much qualify a person for any entry-level position in the computer profession, including a cybersecurity position. But a person with a highly specific degree may have a problem getting a broader position. And I don’t think new graduates armed with a bachelor’s degree in cybersecurity are going to want to limit themselves to that relatively small subset of available jobs.

Think of it from a hiring manager’s perspective. She has an opening for a database manager and must choose between two candidates. One has a general CS degree, and his studies included classes in database management. The other has a cybersecurity degree, but though he says he can write a database management security policy, he never took a course in database management. Welcome aboard, CS graduate!

You might contend that the cybersecurity graduate will pursue the plethora of cybersecurity job openings rather than a database management position, but that assumes new graduates want to limit themselves to a very specific, and small, subset of computer-related job openings. And even there, they will still be competing with holders of general computing degrees.
My Magic Wand

If I could wave a wand to fix the problem of a lack of information security knowledge in college graduates, I would have the NSA and other stakeholders invest their time and money not in developing Centers of Excellence, but in influencing computer science and information systems departments to incorporate security into all relevant courses and degree programs.

This is actually the direction recommended by the Association for Computing Machinery and the IEEE Computer Society in their most recent update to their recommended curriculums for computer science programs and for information systems programs.

Unfortunately, I recently reviewed introductory computer science courses from a wide variety of prestigious universities, and none of the courses I looked at seemed to be implementing that guidance. Incidentally, in the course of doing some volunteer work, I spoke to some college officials about adding a security course to their curriculum. Next to impossible, they said, since curriculums go through lengthy approval processes. To get a course to include security, you have to find a textbook that covers the subject. Good luck. Few of the most popular textbooks used in computer science classes devote even one chapter to security, and many contain no security content at all. Some of the newer introductory IS textbooks cover security to some extent, but I have yet to see detailed security content in textbooks for advanced courses.

So, magic wand, let the NSA and other organizations begin to write content for such textbooks, and then offer grants to colleges to enhance their curriculums.

The goal is not to create a handful of people with a little extra specialized education, but to ensure that the future computer professional community, as a whole, at least has the fundamental knowledge to begin proactively securing its work products.
Thinking Inside the Box

And what about the idea that the graduates of cybersecurity programs should be drawn from students who somehow are better at thinking outside of the box? Quite simply, it is a notion that is grossly ignorant of what has actually been working for decades.

Until recently, the NSA had never hired anyone with a cybersecurity degree. And yet the NSA is widely considered to be the world’s leader in information security and information warfare. How then did the NSA establish such pre-eminence in the field?

It searched among its employees for high-caliber people and then cross-trained them. It is that simple. The NSA continues to do so in many fields, including information assurance.

But will cybersecurity degree programs give the NSA and other employers people who think outside of the box? And will such new graduates have an edge over experienced professionals? No; that is frankly delusional. The proponents of such nonsense argue that hackers are able to get through the strongest security countermeasures by dint of some unique thought processes.

Wrong. Teenagers have been able to break into systems not because of superior skills, but because the people running the systems in question have inadequate professional security training. The hackers aren’t thinking outside of the box; they are just thinking about the task at hand.

Skilled professionals are not usually asked to break into computer systems. As a rule, violating laws is not their task at hand. But look at what happens when you make it their job. When I recruit a new trainee for penetration testing, I look for the smartest, most experienced computer professional available — not a teenager. When I tell them what I want them to do, they’re generally shocked. They have never applied their skills to such a purpose. But after they get over the surprise, they do things that make my head spin. What they tend to do is to perfect the attacks that they have had experience repelling on a regular basis, and incorporate their detailed knowledge of operating systems gained from years of administering systems.

(Some IT professionals do indeed pursue such activities as part of their job, but we only catch glimpses of the successes of these U.S. government “hackers,” who break into highly secure foreign government systems, such as Iraqi air defense systems. They were also prepared to cripple the Iraqi financial system. There are also claims that U.S. cyberwarriors designed the Stuxnet virus to damage Iran’s nuclear capability. These hackers accomplish tasks that teenagers think are science fiction. Their exploits are just rarely publicized.)

But we give young hackers more than their due. Some people say we should harness their supposedly superior knowledge of security and recruit them to protect the systems they break into. Need I point out the absurdity of this idea? It is akin to thinking that just because some idiot is capable of stealing a car and crashing it into a wall, he should have the skills to fix the damage. I’m sorry, but anyone claiming that the idiot could fix the car should likewise be thought an idiot. It is exponentially easier to break something than it is to fix it, especially when computers are concerned.
The System Ain’t Broke

I find the idea that what the U.S. government really needs is a crop of new cybersecurity graduates to be insulting to the hundreds of thousands of current government computer professionals. The government needs to stop this nonsense and focus on expanding programs to cross-train highly skilled and immediately available workers.

Similarly, private organizations need to properly invest in their staffs. Just as they expect to train new employees in their job functions, they need to expect to have to invest in the training of their cybersecurity professionals.

What we need are not a bunch of cybersecurity degree holders, but a willingness to invest in current employees. Employees who earned a broad-based CS degree and then gained years of experience on the job are quite simply a better resource than a green graduate.

Don’t get me wrong. I have nothing but admiration for the young people who are pursuing cybersecurity degrees. Most of these degree programs are tailored to part-time students, who usually have to juggle full-time jobs, coursework and a family life during a program that can take more than seven years to complete. That demonstrates true character and perseverance, qualities that matter more than any particular skill. Even so, breadth of knowledge matters more than the topic of the degree.

Unfortunately, the colleges are often selling these people hype, not reality. For example, one college is telling people that they are training them to be cyberwarriors, while the actual coursework teaches them to write security policies, not to be hands-on practitioners. This is like telling someone that you are training him to be a Navy SEAL, while you are only training him in logistics, qualifying him at best to be a quartermaster for the SEALs.

When you come right down to it, though, there is little in the world of information security that is more valuable than experience. And new graduates nearly always lack it to any significant degree. Just think about someone who takes a class in security policy. Say there are 15 class sessions that average three hours each. Then let’s generously assume that the student does 115 hours of work outside of class. By putting in 160 hours, the student can rightly be said to have worked hard for his grade. But all that time is still the equivalent of just four workweeks. Would you trust someone with that level of experience to develop a policy document for a large office or to meet some regulatory compliance standard? Clearly not. It is nice that they have this experience, but it only makes them marginally better than a person with no experience at all.
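The back-of-the-envelope math above can be written out as a quick calculation (the 40-hour workweek divisor is a standard assumption, and the 115-hour homework figure is the generous estimate granted in the text):

```python
# Rough estimate of the total effort one security-policy course represents.
sessions = 15          # class sessions in the course
hours_per_session = 3  # average length of each session, in hours
outside_work = 115     # generous estimate of homework/study hours

total_hours = sessions * hours_per_session + outside_work  # 45 + 115 = 160
workweeks = total_hours / 40                               # assuming a 40-hour workweek

print(total_hours)  # 160
print(workweeks)    # 4.0
```

Four workweeks of exposure is the entire "experience" the course confers, which is the point of the comparison.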

Undergraduates don’t have expertise in their major; they have a slightly enhanced background. As for being qualified to combat the most elite hackers in the world, well, what exactly in a degree program that focuses on policies is preparing you to take on the hackers?

If the NSA and other parties want to reward promising students with scholarships for studying cybersecurity, then they need to think long and hard about what they expect to gain from such programs.

Scholarships are great. I believe in giving a hand to young people who show aptitude. But highly targeted scholarships can go wrong when the grantors expect to get certain results in return. And just consider some of the ways they could be disappointed in the results of their cybersecurity scholarship programs.

First of all, up to 80% of college students change their majors at least once. This means that as many as 80% of the people who receive cybersecurity scholarships are likely to not want to be in the cybersecurity profession by the time that they earn their undergraduate degrees.

Worse, in a way, are the incompatible goals of an organization such as the NSA. It wants to give cybersecurity scholarships in particular to young people who have a tendency to think outside of the box. The funny thing about young people who think outside of the box: They often do things that will disqualify them for the security clearance they will need to get a job at the NSA.
Opinion by Ira Winkler

Let’s say that they are encouraged to develop their hacking skills. Will they resist the urge to use those skills, or will they do something like join up with Anonymous? If they do, the NSA is not going to get the benefit of their education in cybersecurity. Even more common, though, are young people who download music and other intellectual property illegally. I have heard that this has become a reason for denying clearances. What I hear is that a clearance is denied only when the value of what was downloaded exceeds some threshold. OK, but students who were selected precisely because they live on the edge are probably more likely than other students to cross that threshold.

When you come right down to it, there is more than a little bit of wishful thinking in this entire drive toward granting cybersecurity degrees. This is actually a case where the thing that we have been doing for years, specifically taking high-caliber people and cross-training them for cybersecurity roles, is a better approach than what has been proposed to replace it. It puts highly skilled people to immediate use, solving immediate problems. We simply have to fully commit ourselves to expanding a proven model, instead of grasping at what is literally a science fiction plot and hoping we will get results many years from now.

 

