September 2, 2011
So far, 2011 has brought radical and even controversial changes to prepare Ubuntu for touch-based consumer
computing, and now Canonical is getting ready for the transition to the cloud on servers. The change should be
a smooth one, as it has been for most companies involved in the cloud segment of the IT industry.
The Ubuntu project has released the first beta of Ubuntu 11.10, codenamed Oneiric Ocelot, with the finished
release expected for download in October.
The deployment and management of clouds and cloud-based workloads running Ubuntu Linux and CentOS Linux on server
hardware isn't exactly new. Sun Hosting fully deployed its Cloud-Based Hosting Solutions in March 2010, and now the company
is busy deploying version 2.0 of its Cloud Enterprise Solutions that will be ready in November.
The beta of Ubuntu Server includes Orchestra, which allows you to provision, deploy, host, manage and orchestrate
enterprise data center infrastructure services.
According to one Linux application developer, OpenStack is Orchestra's foremost workload. Orchestra features separate
servers for provisioning, management, monitoring and logging of applications, servers and workloads.
Orchestra, meanwhile, is tightly integrated with the Ocelot beta's other big push towards clustered servers running
Ubuntu Linux as a cloud platform-- Ubuntu Ensemble, which is designed to handle service deployment and orchestration
both in the cloud and on bare metal.
Orchestra is billed as something that will bring DevOps to clouds and data centers running Ubuntu. DevOps is the
voguish term for trying to bridge the gap between the development and management of applications, which marketing
people once called application lifecycle management.
Ensemble provides a set of best practices and formulas to help ensure that Linux apps running on a server will
operate in the same way once in the cloud. In that case, the cloud in question would be an Amazon or
Sun Hosting-compatible service.
Ubuntu 11.10 is due to hit a second beta on September 22nd and its final release on October 13, 2011.
But whatever happens in this edition, it's a relative sideshow compared to April's interface overhaul in
Ubuntu 11.04, which demoted GNOME, and to next year's main event: Ubuntu 12.04, due on April 26, 2012.
Ubuntu 12.04 will be a Long-Term Support (LTS) edition, meaning it sets the look, feel and technical direction of
successive versions for the next two-year period. The last LTS was 10.04 in April 2010, which saw Canonical chief Mark
Shuttleworth's Apple admiration manifest itself in Ubuntu's current OS X-like interface and in the integration of
the PC distro with online music and backup services provided for Ubuntu users by Canonical.
Another company that's been very busy in the Cloud segment over the past year is Avantex. The company
last month announced the launch of its Enterprise Cloud Solutions. And the company will soon make another announcement on the progress of two more Cloud Solutions.
As always, Linux News Today will keep you posted on these and other developments in the Linux community.
In other technology news
ISPs and various networking companies have lately been working hard to speed up the flow of data across the
Internet. Three such companies are Google, VeriSign and OpenDNS, a solutions provider for businesses
and public organizations that need outsourced DNS services.
The technology helps speed up the DNS (Domain Name System) that provides the numeric Internet Protocol (IP)
address needed to get data to an Internet domain such as hightechnewstoday.com.
Called edns-client-subnet in some technical circles, or more ambitiously the "Global Internet Speedup," it uses
geographic information associated with IP addresses to help servers fetching information get it from the closest and
therefore fastest server.
"Anybody using OpenDNS or Google Public DNS will immediately get the benefits of this technology," said OpenDNS
CEO David Ulevitch in an interview. Using it, "the worst-case scenario is that things remain the way they are today,"
and the best-case scenario is that network delays are as low as they can be, he said.
Google proposed the technology in 2010, though Ulevitch said it's been under discussion for longer than that.
And make no mistake-- the search giant has a strong and vested interest in making the Internet a lot faster, including
the use of its own Google Public DNS service, and its Internet operations are big enough that it can use the technology
both when requesting data from other servers and when others request data from its own servers.
And Google endorsed the work as well. "Google is committed to making the Internet quicker, not just for our users,
but for everyone else," said Google engineer Dave Presotto. "We will do that any way we can, by improving protocols,
browsers, client software, and networks."
The tried and true analogy for DNS is that it acts like an old-fashioned phone book-- you look up a person's name
and the book provides the phone number. In the case of DNS, when an Internet user types in a domain name, DNS
servers provide the computer with the corresponding IP address.
Ulevitch likens the new technique to a phone book that gives a bit more information based on part of your own
phone number. Specifically, it uses roughly the first three quarters of an IP address-- for a 32-bit IPv4 address,
typically the first 24 bits. That's enough to narrow down your location generally but not pinpoint it.
A server called a DNS resolver, typically operated by an Internet service provider, has the job of finding the IP
address of the server you're trying to reach, then providing your computer with the answer.
In the phone book analogy, it's as if you provide the area code and prefix of your phone number, but not the full
number, and the phone book gives you the number of a restaurant that's close to you rather than one across the city.
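The truncation described above-- keeping only the leading bits of a client's IPv4 address and discarding the rest-- can be sketched with Python's standard ipaddress module (the addresses and function name here are illustrative):

```python
import ipaddress

def client_subnet_hint(client_ip: str, prefix_len: int = 24) -> str:
    """Truncate a client's IPv4 address to its enclosing network,
    keeping only the first prefix_len bits, in the manner of the
    edns-client-subnet hint forwarded to upstream name servers."""
    network = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return str(network)

# The resolver forwards only the network portion of the address:
print(client_subnet_hint("203.0.113.57"))  # 203.0.113.0/24
```

Name servers that understand the hint can then answer with the address of a mirror near that network, without ever seeing the client's full address.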
Proponents of the technology have signed up some partners in the content delivery network (CDN) industry. These
companies specialize in mirroring Web sites or other Internet operations around the world so that a person can get
access to the data without having to request it from a server that's on the wrong side of an ocean.
Participating CDNs are BitGravity, CloudFlare, Comodo, CDNetworks, DNS.com and Edgecast. The two biggest CDNs,
Akamai and Limelight Networks, aren't partners, though a representative from Akamai did review draft versions
of the technology before declining to participate at the time.
But Ulevitch thinks that this could soon change. "I do hope to have all of them on board before the year is over,"
Ulevitch said. They'll need to be convinced through engineering and testing that the work is worth it, but Ulevitch
says their services will grow more efficient with the technology as more ISPs and larger companies use it on a
wider scale.
That's because with this new technology, they can deliver data more smoothly and faster. Even if a user has a
high-speed Internet connection and the server has high data-transfer speeds, that person's computer and a distant
server have greater lag in each step of their numerous back-and-forth communications. That lag, called latency,
makes a website feel less responsive, and it contributes to problems such as lost data packets, Ulevitch said.
The technology isn't standardized yet, but Google, OpenDNS and a few other advocates hope it soon will be. It's
described in an informational draft at the Internet Engineering Task Force under the name "Client subnet in DNS
requests," an effort spelled out by multiple companies involved in networking technology.
"I'm hoping it will happen within a year," Ulevitch said of standardization. "There have been a couple of false
starts. There were people who were ambivalent about it. But once something gets a significant amount of vendor
adoption in the real world, that ends up speaking volumes to make something a standard."
In other Linux news
It's now official-- Ubuntu 11.10 will ship with both the client and server components of Cloud Foundry, the "platform
cloud" VMware open sourced in late April. The project actually started in January and is on schedule.
Yesterday, VMware and Ubuntu's Canonical both announced that the next version of the Linux distribution due for
official release in October 2011 will include Cloud Foundry packages built by a team of Canonical engineers.
Canonical claims about 12 million active Ubuntu desktop users, and VMware boasts that with the Cloud Foundry
client on the imminent Oneiric Ocelot, these millions will be only a few commands away from deploying an application
on its existing Cloud Foundry service.
With the Cloud Foundry server deployment tools bundled as well, Ubuntu users will have the ready option of building
their own cloud based on the platform.
Ubuntu is already the core OS behind VMware's service. Akin to Microsoft's Azure or Google's App Engine, Cloud Foundry
is an online service for building, deploying, and readily scaling applications. But unlike Microsoft or Google, VMware has
released the code behind the service, hoping to spawn an army of compatible services.
Still tagged as an initial beta version, the cloud platform lets you build applications with Java, Ruby on Rails,
the Ruby framework Sinatra, and Node.js, the darling of today's Silicon Valley development world.
VMware's Cloud Foundry service is offered alongside various similar services, providing online access to the MySQL,
MongoDB, and Redis databases as well as the RabbitMQ open source messaging system. VMware's SpringSource arm acquired
Rabbit Technologies in April of 2010.
Yesterday, VMware also announced Cloud Foundry partnerships with Dell, enStratus, and OpsCode, each meant to facilitate
the deployment of the Cloud Foundry platform on local servers.
Dell will release a version of its Crowbar software for installing and configuring Cloud Foundry onto bare-metal
servers. enStratus has updated its cloud management tool to help the deployment of VMware's platform atop the eighteen
"infrastructure clouds" it handles.
And OpsCode will publish the OpsCode Chef "recipes" – sysadmin scripts – that VMware built for deploying Cloud Foundry.
These will also be rolled into the OpsCode service that lets you use the open source Chef platform over the Internet.
Core Ubuntu developer Dustin Kirkland offers instructions for using the Cloud Foundry VMC client package (ruby-vmc)
and command line interface bundled with Oneiric Ocelot, and he provides a separate tutorial for the VCAP server.
Canonical has long offered software for building infrastructure clouds, services that provide readily scalable access
to raw computing resources such as processing power and storage. It first embraced the Eucalyptus platform, then it
dropped Eucalyptus for OpenStack. However, this is Ubuntu's first move into platform clouds, which operate at a higher
level, hiding raw resources.
VMware has reinvented itself as an open source company-- at least that's what the company claims.