Tag Archives: open-source

The new struggles facing open source

The religious wars have faded, as new conflicts around control, code ‘sharecropping,’ ‘fauxpen source,’ and n00b-sniping arise

The early days of open source were fraught with religious animosities we feared would tear apart the movement: free software fundamentalists haggling with open source pragmatists over how many Apache licenses would fit on the head of a pin. But once commercial interests moved in to plunder for profit, the challenges faced by open source pivoted toward issues of control.

While those fractious battles are largely over, giving way to an era of relative peace, this seeming tranquility may prove more dangerous to the open source movement than squabbling ever did.

Indeed, underneath this superficial calm, plenty of tensions simmer. Some are the legacy of the past decade of open source warfare. Others, however, break new ground and arguably threaten open source far more than the GPL-vs.-Apache battle ever did.
How we got here: From purity to profit

The different sides used to be clear. Richard Stallman chaired the committee on free software purity while Eric S. Raymond inspired the open source movement.

Both sides rigidly held to their cause. And both sides draped themselves in a different licensing flag: GPL for the free software purists, BSD/Apache for the open sourcerors.

Not surprisingly, the increasing popularity of both camps stirred significant financial interest; thus, the profit motive came to open source. VCs prowled for projects with enough downloads to justify a support-and-service business model. Companies like Alfresco, JBoss, XenSource, and Zimbra sprang up to capitalize on the industry’s interest in open source, with developers increasingly wary of their be-suited new neighbors.

As these startups grew toward IPOs, however, the support-and-service model ran out of gas, as 451 Research analyst Matt Aslett warned. Then began the “open source plus proprietary add-ons” era of open source, with companies building “enterprise versions” of open source projects, withholding features for paid subscribers. The dreaded Open Core model was born, and the industry set out to tear itself apart over accusations of bait-and-switch and proprietization of open source.
The era of milquetoast open source

Excoriating fellow open source proponents on a grand stage over grand themes seems at this point a figment of the past. Infighting has become more contained, almost on a project-by-project basis. The GPL has steadily diminished in importance as developers have opted for the laissez-faire approach of Apache-style licensing. Commercial interests run rampant in open source. It’s how open source is done these days — which may be the fundamental issue facing open source today.

As free software advocate Glyn Moody argues, a certain amount of tension in open source is desirable because a lack of tension “means people don’t care anymore.” He’s right, but what belies this semblance of open source as a happy, if bland, family today is a shift away from passionate arguments about freedom and toward a more calculated conflict over control.
The rise of the company man

Control as a central issue for open source finds its roots in past debates over Open Core. While free sourcers and open sourcerors might have disagreed on the optimal license to guide a development community, both aligned on the need to keep corporate interests from controlling a project’s community. This mistrust of corporate influence over open source code persists to this day, but as it turns out, corporate influence — and control — is both a blessing and a curse.

While 12.4 percent of development on the Linux kernel is done by unaffiliated developers, presumably out of the kindness of their hearts, most of the kernel is written by developers paid by Intel, Red Hat, and others. While I’m sure they would like to contribute regardless of a paycheck, the reality is that most can’t afford to write software for fun.

This principle applies to most any open source project of any significance. OpenStack? HP, Red Hat, and Mirantis combine for nearly 50 percent of all code contributions. Apache Software Foundation projects like Cassandra (Facebook, DataStax, and so on), Hadoop (Cloudera, Hortonworks, MapR), and others all depend heavily on corporate patronage.

Open source software, in other words, may be free to use, but it’s not free to build.
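
The affiliation figures above are the sort of thing anyone can approximate from commit metadata: walk the project’s history and bucket authors by the domain of their email address. The Python sketch below is purely illustrative, and its assumptions are worth flagging: the repository path is a placeholder, the one-year window is arbitrary, and an email domain is only a crude proxy for an employer (the Linux Foundation’s published kernel reports rely on curated mappings instead).

```python
#!/usr/bin/env python3
"""Rough view of contribution share by email domain (a crude proxy for employer)."""
import subprocess
from collections import Counter

REPO = "/path/to/linux"    # placeholder: a local clone of the project you care about
SINCE = "1 year ago"       # arbitrary window

# Collect the author email of every commit in the window.
emails = subprocess.run(
    ["git", "-C", REPO, "log", f"--since={SINCE}", "--pretty=%ae"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Bucket by domain; gmail.com, kernel.org, and the like will hide real affiliations.
domains = Counter(addr.rsplit("@", 1)[-1].lower() for addr in emails if "@" in addr)

total = sum(domains.values()) or 1
for domain, count in domains.most_common(15):
    print(f"{domain:30} {count:7} {100 * count / total:6.1f}%")
```

Run against any sizable project, a listing like this makes the point concrete: the long tail of hobbyist addresses is real, but the top of the table is usually corporate.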

Still, some dislike the corporate influence for another, more troublesome reason. “I think pretty soon we’re going to see how bad it is when every successful [open source] project is backed by a company, most of which fail,” declares Puppet Labs founder and CEO Luke Kanies.

Kanies makes an astute point: A project may be very successful, but that won’t necessarily translate into a financial bonanza for its primary contributors. If the company owns the copyright and other intellectual property rights behind a project, then fails — well, the dot-org fails with the dot-biz.

That’s one major reason we’ve seen foundations become such a big deal. Foundations, however, are not without their issues.
Cloaking corporate interests in foundational garb

In the past few years, foundations have become the vanity plate of corporate open source. While some companies successfully push code to a true community-led foundation (OpenStack comes to mind), others use foundations as a facade for “fauxpen source.”

One recent example is the Open Data Platform, which amounts to a gathering of big companies trying to fund Hadoop distributions that rival Cloudera and MapR. As Gartner analysts Merv Adrian and Nick Heudecker see it, ODP “is clearly for vendors, by vendors,” and they rightfully worry that “[b]asing an open data platform on a single vendor’s packaging casts some doubt on ‘open.’”

Not that ODP is alone in this. Plenty of foundations essentially serve the interests of a single vendor, whatever their ability to gather a few heavy-pocketed friends to go through the motions of “community.”

Like the Open Core concerns of the first 10 years of open source, corporate foundations rub raw the free spirits in the open source world, because such foundations set up an asymmetric power structure. It makes little difference whether copyright assignment flows to a single company or to a foundation led by a single company; the effect is the same: The would-be contributor amounts to a particularly powerless digital sharecropper.

This isn’t the only tension in foundation land.
Controlling the code

One of the primary reasons for going to a foundation is to make project governance open and predictable. Many projects, however, eschew governance or licensing altogether. The so-called GitHub generation has been happy to load the code repository with software of unknown licensing pedigree. While GitHub has been trying to reverse this trend toward license-free development, it persists.
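
GitHub’s main lever for reversing that trend has been surfacing license metadata, and its public REST API exposes a per-repository license lookup. The sketch below is only a sketch: the repository list is a placeholder, the requests are unauthenticated (so the anonymous rate limit applies), and it assumes the endpoint returns 404 when no license file is detected, as documented at the time of writing.

```python
#!/usr/bin/env python3
"""Flag repositories with no detectable license, via GitHub's REST API (illustrative)."""
import requests

# Placeholder list: swap in the repositories you actually depend on.
REPOS = ["torvalds/linux", "someuser/some-unlicensed-project"]

for full_name in REPOS:
    resp = requests.get(
        f"https://api.github.com/repos/{full_name}/license",
        headers={"Accept": "application/vnd.github+json"},
    )
    if resp.status_code == 404:
        print(f"{full_name}: no license detected")          # the GitHub-generation problem
    elif resp.ok:
        spdx = (resp.json().get("license") or {}).get("spdx_id", "unknown")
        print(f"{full_name}: {spdx}")
    else:
        print(f"{full_name}: lookup failed (HTTP {resp.status_code})")
```

A 404 here is precisely the “unknown licensing pedigree” problem: with no license attached, the default is all rights reserved, whatever the repository’s contributors may have assumed.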

Even where a license exists, GitHub “communities” stand in contrast to more formal foundations. In the latter, governance is central to the organization’s existence. In the former, virtually no governance exists.

Is this bad?
As Red Hat chief architect Steve Watt notes, “Obviously, the project author is entitled to that prerogative, but the model makes potential contributors anxious about governance.”

In other words, we don’t worry as much anymore about a project’s license, which was the way corporations would seek to control use of the code. Control of projects has shifted from the code itself to governance around the code.

But it’s not only The Man that makes open source a minefield.
With communities like this …

The final, and perhaps most entrenched, tension facing open source today stems from a problem we’ve always had, but which has become more pronounced in the past few years: The open source welcome committee is not always welcoming.

It has always been the case that some projects have leaders who can be fearsome to cross. Anyone who has had Linus Torvalds tell them, “*YOU* are full of bull—-,” knows that open source requires a thick skin.

But things have gotten worse.

No, not because project leads are increasingly rude or callous, but because there are far more newbies in any given project. As one Hacker News commenter notes, “[S]mall projects get lots of, well, basically useless people who need tons of hand-holding to get anything accomplished. I see the upside for them, but I don’t see the upside for me.”

Dealing with high volumes of would-be contributors with limited experience strains the patience of the best of leaders, and well, sometimes those leaders aren’t the best, as this broadside from OpenLDAP’s Howard Chu shows:

If you post to this list and your message is deemed off-topic or insufficiently researched, you *will* be chided, mocked, and denigrated. There *is* such a thing as a stupid question. If you don’t read what’s in front of you, if you ignore the list charter, or the text of the welcome message that is sent to every new subscriber, you will be publicly mocked and made unwelcome.

As one example, half of all contributors to the Linux kernel in the past year are new contributors. This same phenomenon is playing out across the industry, and “Newbies Not Welcome!” signs like Chu’s aren’t a great way to accommodate the influx of those who want to participate but don’t yet know how.

Ultimately, open source isn’t about code. It’s about community, and as Bert Hubert suggests, “community is the best predictor of the future of a project.” That community isn’t fostered by jerk project leads or corporate overlords pretending to be friendly foundations. It’s the heart of today’s biggest challenges in open source — as it was in the last decade.



SDN in 2014: A year of non-stop action

Review of dozens of SDN moves may hint at what’s in store for 2015

The past year was a frantic one in the SDN industry as many players made strategic and tactical moves to either get out ahead of the curve on software-defined networking, or try to offset its momentum.

December
Juniper unveils a version of its Junos operating system for Open Compute Platform switches, commencing a disaggregation strategy that’s expected to be followed by at least a handful of other major data center switching players in an effort to appeal to white box customers.

November
Cisco declares “game over” for SDN competitors, and perhaps the movement itself, prompting reaction from two industry groups that the game has just begun; Alcatel-Lucent and Juniper also virtualize their routers for Network Functions Virtualization (NFV) requirements; AT&T and others unveil ONOS, an open source SDN operating system viewed as an alternative to the OpenDaylight Project’s code.

October
Cisco joins the Open Compute Project, 16 months after criticizing it as a one-trick white box commodity pony that has “weaknesses” and is destined to “lose;” Internet2 demonstrates a nationwide virtualized multitenant network, formed from SDN and 100G, that operates as multiple discrete, private networks; increased competition, largely as a result of VMware’s $1.26 billion acquisition of network virtualization start-up Nicira, goads Cisco into selling most of its stake in the VCE joint venture to EMC; Dell increases its participation in OpenDaylight after initially having doubts about the organization’s motivations; Start-up SocketPlane emerges to establish DevOps-defined networking; Cisco invests $80 into a cloud venture with Chinese telecom vendor TCL.

September
Cisco boosts its Intercloud initiative, an effort to interconnect global cloud networks, with 30+ new partners, 250 more data centers, and products to facilitate workload mobility between different cloud providers; HP opens its SDN App Store; Brocade becomes perhaps the first vendor to unveil an OpenDaylight-based SDN controller; Cisco loses two key officials in its Application Centric Infrastructure and OpenStack efforts; Cisco acquires OpenStack cloud provider Metacloud; Infonetics Research says the SDN market could hit $18 billion by 2018; SDN’s contribution to the Internet of Things becomes clearer.

July
A Juniper Networks sponsored study finds 52.5% yay, 47.5% nay on implementing SDNs; Cisco ships its ACI controller, and announces pricing and packaging of its programmable networking lineup; The IEEE forms a 25G Ethernet study group after a number of data center switching vendors with considerable operations in SDN and cloud form a consortium to pursue the technology; Big Switch Networks unveils its Cloud Fabric controller; The Open Networking User Group establishes working groups to address what it sees as the biggest pain points in networking, and issues a white paper describing the current challenges and future SDN needs; After initially claiming it wasn’t SDN, Cisco now says ACI is the “most complete” SDN; Cisco says its acquisition of cloud orchestrator Tail-f will complement its own Intelligent Automation for Cloud product.

June
Facebook unveils its homegrown “Wedge” SDN data center switch; Cisco acquires cloud orchestrator Tail-f, which gives it entrée into AT&T’s SDN project; HP unveils an SDN switch with a midplane-free chassis, similar to Cisco’s Nexus 9500; Market researchers find that SDN “hesitation” is slowing spending on routers and switches; Avaya, citing its experience at the Sochi Winter Olympic Games, describes a plan to ease implementation of SDN and other environments using its fabric technology.

May
HP clarifies its views on open source SDNs; A Goldman Sachs report concludes that Cisco’s ACI provides a 3X better total cost of ownership than VMware NSX; Cisco CEO Chambers dashes talk of Cisco acquiring cloud provider Rackspace; Cisco offers products to allow earlier generation Nexus switches to participate in a programmable ACI environment; SDN prompts more questions than answers at a Network World conference; Seven months after dismissing OpenDaylight and open source SDNs, HP raises its investment and participation in OpenDaylight; Cisco’s Noiro Networks open source project is revealed as a contributor to a policy blueprint approved for the OpenStack Neutron networking component.

April
CloudGenix debuts as the latest SDN start-up targeting enterprise WANs; Michael Dell shares his views on SDNs after his namesake company allies with SDN companies Big Switch Networks and Cumulus Networks; Juniper, after initially dismissing OpenDaylight, appears ready to accept it as it develops a plugin to link its own OpenContrail SDN controller to the open source code; Cisco and VMware take the SDN battle to the policy arena; Cisco unveils the OpFlex policy protocol, largely viewed as an alternative to OpenFlow and other southbound protocols, for ACI and SDNs.

March
New certifications are expected as SDN takes hold in the networking industry; three years after pledging not to enter cloud services and compete with its customers, Cisco enters cloud services through its $1 billion Intercloud initiative; Dell unveils a fabric switch and SDN controller designed to scale and automate OpenStack clouds; Cisco rolls out new chassis configurations for its Nexus 9000 switches, the hardware underlay of its ACI programmable networking response to SDN; OpenDaylight commissioned study concludes that everyone wants open source SDNs; Cumulus garners additional support for its bare metal NOS; SDN preparation may require 11 steps; Goldman Sachs says there’s nothing really new to SDNs; AT&T, NTT and others share SDN implementation experiences at Open Networking Summit 2014; Brocade becomes an early provider of OpenFlow 1.3; NEC looks to scale OpenFlow SDNs.

February
HP Networking head Bethany Mayer is tapped to lead the company’s new Network Functions Virtualization effort; Juniper expands its carrier SDN portfolio with controller and management products at Mobile World Congress; Research finds that enterprise adoption of SDNs lags that of service providers due to several factors, primarily the criticality of the network itself; Big Switch explains why it is optimistic after rebooting its SDN business; OpenDaylight announces that its “Hydrogen” SDN release is now available, after a delay; SDN start-up Pluribus Networks ships its server-switch product.

January
IBM is reported to be looking to sell its SDN business for $1 billion; JP Morgan downgrades Cisco stock based on challenges in emerging markets, and on the potential impact of SDNs; Cisco announces ACI Enterprise Module, a version of its ACI SDN controller for enterprise access and WAN programmability; ACG Research finds that sales of SDN products for live service provider deployments will reach $15.6 billion by 2018, while those that have live deployment potential will reach $29.5 billion; SDN startup Anuta Networks unveils a network services virtualization system for midsize and large enterprises; Reports surface that an SDN schism has developed at Juniper, pitting Junos and OpenDaylight programmers against CTO and Founder Pradeep Sindhu and prompting the exit of many engineers; AT&T determines that Cisco’s ACI is too complex and proprietary for its Domain 2.0 SDN project, according to an investment firm’s report.


Open source software can be more expensive than Microsoft

Microsoft cheaper to use than open source software, UK CIO says

A British government CIO says every time they compare FOSS to MSFT, Redmond wins.

 

A UK government CIO says that every time his organization evaluates open source and Microsoft products, Microsoft comes out cheaper in the long run.

 

Jos Creese, CIO of the Hampshire County Council, told Britain’s “Computing” publication that part of the reason is that most staff are already familiar with Microsoft products and that Microsoft has been flexible and helpful.

 

“Microsoft has been flexible and helpful in the way we apply their products to improve the delivery of our frontline services, and this helps to de-risk ongoing cost,” he told the publication. “The point is that the true cost is in the total cost of ownership and exploitation, not just the license cost.”

 

Creese went on to say he didn’t have a particular bias toward open source or Microsoft, but that proprietary solutions from Microsoft or any other commercial software vendor “need to justify themselves and to work doubly hard to have flexible business models to help us further our aims.”

 

He acknowledged that there are problems on both sides. In some cases, central government has developed an undue dependence on a few big suppliers, which makes it hard to be confident about getting the best value out of the deal.

 

On the other hand, he is leery of depending on a small firm; Red Hat aside, there aren’t many large, financially solid open source firms on the order of Oracle, SAP, and Microsoft. Smaller firms often offer the greatest innovation, but there is a risk in agreeing to a significant deal with a smaller player.

 

“There’s a huge dependency for a large organization using a small organization. [You need] to be mindful of the risk that they can’t handle the scale and complexity, or that the product may need adaptation to work with our infrastructure,” said Creese.

 

I’ve heard this argument before. Open source is cheaper in acquisition costs but not easy to support over the long run. Part of it is FOSS’s DIY ethos, and bless you guys for being able to debug and recompile a complete app or Linux distro, but not everyone is that smart.

 

The other problem is the lack of support from vendors or third parties. IBM has done what no one else has the power to do, but 20 years after Linus first tossed his creation onto the Internet for all to use, we still don’t have an open source equivalent to Microsoft or Oracle. Don’t say that’s a good thing, because that’s only seeing it from one side. Business users will demand support levels that FOSS vendors can’t provide. That’s why we have yet to see an open source Oracle.

 

The part that saddens me is that reading Creese’s interview makes it clear he has more of a clue about technology than pretty much anyone we have in office on this side of the pond.

Weighing the IT implications of implementing SDNs

Software-defined anything has myriad issues for data centers to consider before implementation

Software-defined networks should make IT execs think hard about a number of key factors before implementation.

Issues such as technology maturity, cost efficiencies, security implications, policy establishment and enforcement, interoperability and operational change weigh heavily on IT departments considering software-defined data centers. But perhaps the biggest consideration in software-defining your IT environment is, why would you do it?

“We have to present a pretty convincing story of, why do you want to do this in the first place?” said Ron Sackman, chief network architect at Boeing, at the recent Software Defined Data Center Symposium in Santa Clara. “If it ain’t broke, don’t fix it. Prove to me there’s a reason we should go do this, particularly if we already own all of the equipment and packets are flowing. We would need a compelling use case for it.”


And if that compelling use case is established, the next task is to get everyone onboard and comfortable with the notion of a software-defined IT environment.

“The willingness to accept abstraction is kind of a trade-off between control of people and hardware vs. control of software,” says Andy Brown, Group CTO at UBS, speaking on the same SDDC Symposium panel. “Most operations people will tell you they don’t trust software. So one of the things you have to do is win enough trust to get them to be able to adopt.”

Trust might start with assuring the IT department and its users that a software-defined network or data center is secure, at least as secure as the environment it is replacing or founded on. Boeing is looking at SDN from a security perspective, trying to determine if it’s something it can objectively recommend to its internal users.

“If you look at it from a security perspective, the best security for a network environment is a good design of the network itself,” Sackman says. “Things like Layer 2 and Layer 3 VPNs backstop your network security, and they have not historically been a big cyberattack surface. So my concern is, are the capex and opex savings going to justify the risk that you’re taking by opening up a bigger cyberattack surface, something that hasn’t been a problem to this point?”

Another concern Sackman has is in the actual software development itself, especially if a significant amount of open source is used.

“What sort of assurance does someone have – particularly if this is open source software – that the software you’re integrating into your solution is going to be secure?” he asks. “How do you scan that? There’s a big development-time security vector that doesn’t really exist at this point.”

Policy might be the key to ensuring that security and other operational aspects in place pre-SDN/SDDC are not disrupted post-implementation. Policy-based orchestration, automation, and operational execution are touted as among SDN’s chief benefits.

“I believe that policy will become the most important factor in the implementation of a software-defined data center because if you build it without policy, you’re pretty much giving up on the configuration strategy, the security strategy, the risk management strategy, that have served us so well in the siloed world of the last 20 years,” UBS’ Brown says.

Software-defined data centers also promise to break down those silos through cross-function orchestration of the compute, storage, network and application elements in an IT shop. But that’s easier said than done, Brown notes – interoperability is not a guarantee in the software-defined world.

“Information protection and data obviously have to interoperate extremely carefully,” he says. “The success of software-defined workload management – aka virtualization and cloud – in a way has created a set of children, not all of which can necessarily be implemented in parallel, but all of which are required to get to the end state of the software-defined data center.”

“Now when you think of all the other software abstraction we’re trying to introduce in parallel, someone’s going to cry uncle. So all of these things need to interoperate with each other.”

So are the purported capital and operational cost savings of implementing SDN/SDDCs worth the undertaking? Do those cost savings even exist?

Brown believes they exist in some areas and not in others.

“There’s a huge amount of cost take-out in software-defined storage that isn’t necessarily there in SDN right now,” he said. “And the reason it’s not there in SDN is because people aren’t ripping out the expensive underlying network and replacing it with SDN. Software-defined storage probably has more legs than SDN because of the cost pressure. We’ve got massive cost targets by the end of 2015 and if I were backing horses, my favorite horse would be software-defined storage rather than software-defined networks.”

Sackman believes the overall savings are there in SDN/SDDCs but again, the security uncertainty may make those benefits not currently worth the risk.

“The capex and opex savings are very compelling, and there are particular use cases specifically for SDN that I think would be great if we could solve specific pain points and problems that we’re seeing,” he says. “But I think, in general, security is a big concern, particularly if you think about competitors co-existing as tenants in the same data center — if someone develops code that’s going to poke a hole in the L2 VPN in that data center and export data from Coke to Pepsi.

“We just won a proposal for a security operations center for a foreign government, and I’m thinking can we offer a better price point on our next proposal if we offer an SDN switch solution vs. a vendor switch solution? A few things would have to happen before we feel comfortable doing that. I’d want to hear a compelling story around maturity before we would propose it.”



 

 

Dangerous Linux Trojan could be sign of things to come

RSA expert details “Hand of Thief” banking Trojan

Desktop Linux users accustomed to a relatively malware-free lifestyle should get more vigilant in the near future – a researcher at RSA has detailed the existence of the “Hand of Thief” Trojan, which specifically targets Linux.

According to cyber intelligence expert Limor Kessem, Hand of Thief operates a lot like similar malware that targets Windows machines – once installed, it steals information from web forms, even if they’re using HTTPS, creates a backdoor access point into the infected machine, and attempts to block off access to anti-virus update servers, VMs, and other potential methods of detection.

Hand of Thief is currently being sold in “closed cybercrime communities” for $2,000, which includes free updates, writes Kessem. However, she adds, the upcoming addition of new web injection attack technology will push the price to $3,000, and introduce a $550 fee for major version updates.

“These prices coincide with those quoted by developers who released similar malware for the Windows OS, which would make Hand of Thief relatively priced way above market value considering the relatively small user base of Linux,” she notes.

Getting Linux computers infected in the first place, however, could be more problematic for would-be thieves – Kessem says the lack of exploits targeting Linux means that social engineering and email are the most likely attack vectors, citing a conversation with Hand of Thief’s sales agent.

Kessem also says that growth in the number of desktop Linux users – prompted, in part, by the perceived insecurity of Windows – could potentially herald the arrival of more malware like Hand of Thief, as the number of possible targets grows.

Historically, desktop Linux users have been more or less isolated from the constant malware scares that plague Windows, which is at least partially a function of the fact that their numbers represent a tiny fraction of the Windows install base.

Users of Linux-based Android smartphones, however, have become increasingly tempting targets for computer crime – and with the aforementioned growth in desktop users, the number of threats may increase even further.



Google Android roundup: Why did JBQ leave AOSP?

Android news/rumors: The end of an era, plus giant robots annoyed as LG removes “optimus” title from latest release, Android’s continued domination and why people think it’s doomed, and a Moto X engineer hates back on critics

The Android realm is not a physical place, else we would have seen flags flying at half-mast and heard announcements made over school loudspeakers – Jean-Baptiste Quéru, godfather of the Android Open Source Project and one of the most influential figures in the ongoing development of the platform, abruptly stepped down from his position as AOSP maintainer this week.

Though JBQ, as he’s generally known, didn’t give explicit reasons for the move, the clever people over at Android Police quickly connected the dots from some of his recent Twitter activity, which bemoaned legal interference in the AOSP release process. Specifically, Quéru’s frustrations about being barred from releasing critical binaries for the new-model Nexus 7 tablet appear to have boiled over.

What’s strongly implied by the Android Police analysis is that Qualcomm, which makes the chipset for the new Nexus 7, has been making it impossible to get fully open-source versions of the software to work properly, withholding code essential for hardware support.

In a subsequent Google+ post, Quéru more or less confirmed this.

“Well, I see that people have figured out why I’m quitting AOSP,” he wrote. “There’s no point being the maintainer of an Operating System that can’t boot to the home screen on its flagship device for lack of GPU support, especially when I’m getting the blame for something that I don’t have authority to fix myself and that I had anticipated and escalated more than 6 months ahead.”

The reaction from the community has been generalized dismay, with sorrowful posts highlighting JBQ’s importance to AOSP and Android in general, as well as widespread rancor directed at Qualcomm.

AOSP’s curiously bifurcated nature – the underlying OS is open-source, but Google can’t distribute the fully open version for a given device unless the OEM gives permission to distribute its proprietary binaries – always makes this sort of issue a bit hazy and complex, but it’s hard to avoid the conclusion that Quéru had every right to be upset. Given that anyone can simply grab the closed-source binaries from the device itself, refusing to give AOSP permission to distribute is puzzling, to say the least.

While the usual caveats about unconfirmed information apply – Quéru himself seems to have some legal obligations that prevent him from speaking explicitly on the subject – it certainly seems as though JBQ’s exit should have been avoidable, and it’s a shame that it wasn’t. Android Authority says it’s “unlikely” that he’ll actually leave Google, but AOSP has nonetheless lost a father figure.

* Speaking of Qualcomm, their latest Snapdragon 800 is powering the just-announced LG G2, according to the many tech blogs that got an early hands-on with the device. In contrast to the recently released Moto X, the G2 is a much more traditional Android flagship – an outsized, feature-packed whopper of a phone, with as many megapixels, GB and GHz as can possibly be crammed into its considerable frame.

From a design perspective, the G2’s big innovations are the loss of LG’s well-worn “Optimus” moniker and the relocation of some of the controls – including the power and volume keys – to the back of the phone instead of somewhere on the side. I have no idea if this is a silly gimmick or a revolutionary answer to the problem of oversized smartphones – and I won’t until I actually get my hands on one – but it’s at least a creative attempt.

* The latest smartphone market share report from IDC says that Android’s global smartphone market share has risen to nearly 80% – up from just below 70% a year before. Sound like great news for Android, right?

Not so fast, says comScore. In the U.S., at least, Android subscriber numbers were flat during 2013’s second quarter, while Apple’s rose slightly. The Guardian also cites a Yankee Group study as saying that Android’s market dynamics indicate that Apple will retake the lead next year.

While they’ve obviously done their homework more assiduously than I have – which is to say, they’ve done some homework – I still have a hard time seeing Android losing too much ground back to That Other Smartphone absent a massively successful launch of the next-gen iPhone. Given that the last couple of iterations haven’t quite matched the stratospheric heights reached by their predecessors, that’s far from a guarantee.

Still, the U.S. market is more heavily Apple-centric than that of the world in general – more like 52% to 40%, according to the aforementioned numbers from comScore, so Apple’s still within striking distance.

* After the Moto X took some lumps on Twitter about its slightly-less-than-cutting-edge specs, Motorola designer Iqbal Arshad slammed critics in an interview with ZDNet.

He said that comparing raw specs misses the point, asserting that the Moto X is architected so differently that such measurements are meaningless.

“So it’s hard to understand because you’re comparing architectures that are fundamentally different. It’s kind of like people who are looking at a Tesla electric car and expecting it to have a V-8 engine. When you talk about an electric motor, it’s hard for people who are used to comparing specs on traditional cars to understand how it truly compares, because it’s completely different,” he said.

He would say that, of course, given that his company is the one charging the same price for less powerful hardware, but he has a point – the Moto X’s voice command and power-saving technologies are a bit more compelling than the avalanche of goofy camera modes. Still, if you’re just in it for pure performance, the ability to say “OK Google, advise me on purchasing decisions” or whatever probably doesn’t cut it for you.

