not idle

CloudExpo Europe 2013, London

February 2nd, 2013 by Aleš Černivec

On Tuesday 29th and Wednesday 30th of January, I attended the CloudExpo Europe conference in London. This is a short overview of the conference from my point of view. I was most interested in the talks given in the conference rooms on the following topics:

  • Big Data / Mobile & Unified Communications
  • Virtual Infrastructures & Platforms
  • Security & Governance

and, of course, the keynote room.

Some of the major themes at the conference were DevOps (efficiency, customers, agility); OpenStack, with a lot about moving to big data (even at CERN) and the reorganization of IT (DevOps magic with virtual infrastructures/platforms); and WAFs, i.e. web application firewalls. Chris Kemp from OpenStack gave a great talk, mainly about the history of OpenStack and its great potential (with a lot of adopters and supporters). My overall observation was that OpenNebula was mentioned only a few times compared to OpenStack (as they said, the EU is lagging behind the US in adopting cloud frameworks). Nigel Beighton, VP of Technology at Rackspace, also gave a great presentation on the second day. In his words: Europe is 18 months behind the US regarding the cloud market.

Big Data and Virtual infrastructure related talks:

The guys from 451 Research gave a great presentation on Hadoop and the database of the cloud (yes, not on the cloud). QLogic says that the future is in SSD storage and that the architecture of providers is changing: they are building controllers for faster storage for hosting cloud services.

SolidFire also builds high-performance SSD storage. They provide performance-related SLAs; only with SSDs can you enforce certain performance policies.

Bernino Lindt from CloudSigma gave a talk on big data on public clouds. I guess everybody agrees that HDDs should be banned from public clouds (only SSDs should be used). Additionally, scalability and elasticity are always the first things to consider. Some cloud brokers that enable bursting capabilities: Enstratus, SlipStream.

SNIA was also there. Here you can find their presentations given at the conference. Storage options for Hadoop gives a really great view of the proposed architecture of Hadoop installations. CERN is also concerned about storage: data aggregation is really not a problem for them, the problem is the storage itself. They produce 200 PB of data per year and are now thinking about cloud bursting to other public cloud providers. They are building Helix Nebula, a science cloud and cloud federation (OpenNebula is also on board).

CERN presents their infrastructure

Security related talks:

Ping Identity talked about what an identity bridge is and what drives the need for one. Gartner coined the phrase identity bridge. It solves a multidimensional problem of identity management on complex infrastructures (clouds). One dimension is inbound SSO and workspaces (SSO for applications, for admins, for clients and users). Another dimension is devices (BYOD). The third dimension is the domain. The solution is the identity bridge, which handles inbound and outbound traffic related to identities (an SSO solution for Cloud 2.0). In Cloud 2.0, the user ID is at the heart of access control to the application. Do not use firewalls; use access control policies based on identities. IDaaS (identity as a service) is a trend in 2013 (also according to Gartner). Trends in SSO across web and mobile cloud applications were presented.

I also attended the panel debating private vs. public cloud. It is described here. Microsoft has found that IT services are moving from person-centric to technology-centric, with more automation.

You can find some more details on Twitter under #cloudexpoeu13, and here are some pics and vids from the conference.

Project CloudScale

October 4th, 2012 by Jure Polutnik

In October 2012, Xlab started working on a new project called CloudScale, funded by the European Community's Seventh Framework Programme (FP7). The project will last three years with a total cost of 4.7 million euros. As a commercial partner (SME), Xlab will be responsible for integrating the developed tools into a common environment and for developing showcases that will be used for dissemination and exploitation purposes.

CloudScale will provide an engineering approach for building scalable cloud applications and services. CloudScale will support Software as a Service (SaaS) and Platform as a Service (PaaS) providers (a) to design their software for scalability and (b) to swiftly identify and gradually solve scalability problems in existing applications. CloudScale will enable the modelling of design alternatives and the analysis of their effect on scalability and cost. Best practices for scalability will further guide the design process. Additionally, CloudScale will provide tools and methods that detect scalability problems by analysing code. Based on the detected problems, CloudScale will offer guidance on the resolution of scalability problems. It answers the ICT Work Programme’s call for achieving massive scalability for software-based services.

The planned validation of project results involves two complementary case studies in the SaaS and the PaaS domain.

CloudScale will leverage European application expertise into the domain of competitive cloud application offerings, both at the SaaS and PaaS level. The engineering approach for scalable applications and services will enable small and medium enterprises as well as large players to fully benefit from the cloud paradigm by building scalable and cost-efficient applications and services based on state-of-the-art cloud technology. Furthermore, the engineering approach reduces risks as well as costs for companies newly entering the cloud market.


Official project website: http://www.cloudscale-project.eu/

Fibonacci, nature and GIMP

September 4th, 2012 by Mariano Cecowski

I’m a fan of Vi Hart‘s mathematical videos, and was particularly blown away by a three-video series on spirals, Fibonacci and being a plant that I strongly recommend (if you can keep up the pace!).

My wife tried to reproduce some of Vi’s great drawings, but small errors in the angle quickly added up and gave poor results. That’s when the programmer in me couldn’t help automating it on the computer, and what better way than to do it in GIMP?

So, I ended up writing a very simple Python script for GIMP (my first one!) that lets you create such golden-ratio spiral figures from a single image. For instance, you can use a petal to create a flower, or, as in this example, a leaf to create a plant.

Leaf

Original

Leaves

Resulting image

You can even create an animation, sending something down a spiral fall (this time the object is not rotated by the golden angle, but by a smaller one).

Fall

You can get the script from the GIMP’s plugin registry.
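The script itself needs GIMP's Python-Fu to do the actual layer copying, but the placement rule it applies is easy to show on its own: each successive copy is rotated by the golden angle (about 137.5°) and pushed a bit further out, just like seeds in a sunflower head. A minimal sketch in plain Python (the function and parameter names here are my own, not the plugin's):

```python
import math

# The golden angle: the "most irrational" slice of a full turn,
# which is why plants use it to pack seeds and leaves so evenly.
GOLDEN_ANGLE = math.pi * (3 - math.sqrt(5))  # about 137.5 degrees

def spiral_placements(n, scale=10.0):
    """Return (x, y, rotation_degrees) for n copies of an image,
    placed phyllotaxis-style: copy i is rotated by i golden angles
    and sits at a radius proportional to sqrt(i)."""
    placements = []
    for i in range(n):
        theta = i * GOLDEN_ANGLE
        radius = scale * math.sqrt(i)
        placements.append((radius * math.cos(theta),
                           radius * math.sin(theta),
                           math.degrees(theta) % 360.0))
    return placements
```

In the actual plugin, each placement becomes a rotated, pasted copy of the source image (the petal or leaf).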

Enjoy, and leave any interesting results at the registry’s entry page.

Starting up with upstart

July 3rd, 2012 by Matej Artač

You may not have paid much attention to the recent way that Debian-based systems handle services: upstart. It seems easy enough to create new services, so here’s a quick overview.
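As a taste of how little is needed, here is what a minimal job file might look like (the service name and binary path are made up for illustration):

```
# /etc/init/myservice.conf -- a hypothetical minimal upstart job
description "my example service"

# start once the normal runlevels are reached, stop on shutdown/reboot
start on runlevel [2345]
stop on runlevel [016]

# restart the daemon if it dies
respawn

exec /usr/local/bin/myservice
```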

Read the rest of this entry »

Thou shalt partition your Home from its Roots

May 10th, 2012 by Mariano Cecowski

Even though most modern, user-friendly Linux distributions do not do this by default, it is, to say the least, advisable to create, during installation, a separate partition for the ‘/home’ folder, and another one for the root of the filesystem (‘/’).

What does this mean? Partitions are logical segments that correspond to physical parts of your disk drive. In Linux, you can ‘mount’ them in the single filesystem tree (or graph, if you wish) that starts at ‘/’ (root). Thus, if you have one single partition and you create a /myfolder directory, all its contents will be located on that partition. But you could also choose to have a separate partition and mount it in that directory, so that the contents will actually be located on a different partition.

Creating a home partition is actually not that complicated. For instance, in Ubuntu, instead of telling it which disk you want to install to (which will remove everything from it and create a single partition), you choose to do it manually and then create two partitions: one for the operating system and global applications of something between 20~50 GB (or 5~10% of your total capacity), which you’ll choose to mount at ‘/’, and another one for user files with the rest of the available space, which will be mounted at ‘/home’. (You can choose, for instance, the EXT4 filesystem.) Additionally, you might want to create a ‘swap’ type partition of ~4 GB (historically the size of your RAM, though that is not necessarily true any more).

But, why should you bother?

Well, for instance, if you’d like to have your personal data encrypted (yes, Linux might be secure, but if someone has physical access to your computer that won’t help much), which is an option Ubuntu has provided for some time now.

You could also mount your root partition as read-only in order to avoid accidental (or otherwise) wipes, overwrites, etc. (be careful though; you’ll need to create some more partitions for things that need to be written, such as /var/log).

There are other reasons for partitioning, including performance and data sharing, but probably the most important advantage is that of installation independence. That is, no matter how badly that last update went, or even if you can no longer boot after a video driver update, you can always re-install your Linux, or even try something else, and keep not only your personal files intact, but also the configurations for the programs you use.

And that’s what happened to me after updating from Ubuntu 10.04 directly to 12.04: I got the black screen of death, and wasn’t able to fix it even after booting in text mode. Pissed off, I installed Linux Mint Debian Edition on the root partition, formatting it but leaving the /home partition alone. LMDE didn’t play nice with my darn graphics card either, so I made a fresh install of Ubuntu 12.04.

Apart from having Unity instead of good ol’ Gnome 2, my computer felt like nothing had happened: Thunderbird showed my mail and calendar the minute I opened it, Firefox had the same bookmarks, and I had the same document history as before the crash. True, I had to install some apps that are not in the default installation, but almost all came from the repositories: Pidgin (all contacts and history intact), Eclipse (last workspace up and running), Opera (it even saved the session, because I hadn’t closed it properly!) and Wine (Microsoft Office running like nothing happened).

True, some programs’ configuration might change between versions, so there could be problems (but only if they don’t properly handle configuration versioning, as they should), and you do have to re-install all the programs that you had installed system-wide rather than in your user space. But it really pays off to have a home partition.



Unsure if you have one? Try this:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        38G  7.3G   29G  21% /
udev            2.9G  4.0K  2.9G   1% /dev
tmpfs           1.2G  996K  1.2G   1% /run
none            5.0M  8.0K  5.0M   1% /run/lock
none            2.9G  352K  2.9G   1% /run/shm
/dev/sda6       258G  215G   31G  88% /home
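In the listing above, the separate /dev/sda6 line mounted on /home gives it away. From a script, the same check can be made with Python's os.path.ismount, which simply asks whether a path is a mount point of its own (a small sketch, not part of any tutorial mentioned here):

```python
import os

def has_home_partition():
    """True if /home is a mount point, i.e. lives on a separate
    partition (or other device) from the root filesystem."""
    return os.path.ismount('/home')

print(has_home_partition())
```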

You don’t have a home partition? No worries, you can move your home; here is a tutorial.

Now that you have your home partition; how about creating an image to back up?

If you are backing up to optical media, it might be a good idea to do so before the next solar magnetic storm.

Internationalization with NetBeans

April 6th, 2012 by Marjan Šterk

The NetBeans rich client platform supports internationalization (i18n) of your modules very well, but we will address two specific problems here.

Localization of Platform Strings

If you have the misfortune that your language is not supported in the platform by default, you also have to translate any platform strings visible from your application. For development, a good option is to:

  1. Create a new cluster and set its branding to e.g. sl for Slovenian.
  2. Right-click it, select Branding, go to the Resource Bundles tab, and search for each English string that appears in your application. If you find it, it is part of the platform and you can translate it right there.
  3. Make your application dependent on this new cluster, then re-build and run. Note that the locale must be sl right from the start in order for your branding to take effect.

To deploy:

  1. Remove dependency of your application on the sl-branded cluster.
  2. Build your sl-branded cluster.
  3. Copy the contents of build/cluster/core/locale to the core/locale folder of your NetBeans platform, and do the same for the build/cluster/modules/locale folder. Now your platform sort of supports your language.
  4. Re-build your application (it will be re-built with your modified platform) and re-package it however you want.

Note that it is also possible to include a copy of the platform into your project’s folder tree, so that all developers build against the same copy of the platform. If your project is set up like that then the platform modifications you made will be visible to all the developers as soon as they update their working copies.

Localizing the Splash Screen

This is something that is, as far as I know, not really supported, so we will use a slight hack. The default splash image location is branding/core/core.jar/org/netbeans/core/startup/splash.gif – let us suppose you have your English splash screen there. First, save your localized splash into the same folder, e.g. as splash_sl.gif for Slovene. If you now build and package the application as a ZIP distribution, the localized splash will be there (in the folder app.name/app.name/core/locale/core_app.name.jar), but it will not be used, because during the build it was renamed to splash_sl_app.name.gif where it should be splash_app.name_sl.gif. You can manually rename it in the created distribution, or modify your build script to do it automatically, and voilà: the localized splash will be used whenever the application is started with the -J-Duser.language=sl option!
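The renaming step is trivial to automate. A small Python sketch of the idea (the folder, application name and locale are placeholders for your own values, not anything NetBeans defines):

```python
import os

def fix_splash_name(folder, app_name, locale):
    """Rename the mis-named localized splash produced by the build,
    splash_<locale>_<app_name>.gif, to the name the platform looks
    for, splash_<app_name>_<locale>.gif. Returns the new path, or
    None if the mis-named file was not found."""
    wrong = os.path.join(folder, "splash_%s_%s.gif" % (locale, app_name))
    right = os.path.join(folder, "splash_%s_%s.gif" % (app_name, locale))
    if os.path.exists(wrong):
        os.rename(wrong, right)
        return right
    return None
```

You would call this on the distribution's core/locale folder after packaging, once per supported locale.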

My capybaras are too many

February 24th, 2012 by Matej Artač

Cucumber and Capybara, exploiting the Selenium driver, are a powerful combination for running automated tests of a web GUI. Most of the tutorials out there show how simple it is to create a session and use it for testing AJAX-based web sites.

What they fail to show is that in a serious test suite there are going to be tens or hundreds of scenarios run in a single sweep. That wouldn’t be a problem in itself. However, each scenario’s step definitions start with a new Selenium session. Each one opens a new Firefox browser. And none of the scenarios closes its browser when it is done.

How do I prevent the capybaras from multiplying and crashing my desktop?
Read the rest of this entry »

Importing your Python modules into Ruby

January 8th, 2012 by Matej Artač

I am considering using Cucumber for testing a project we have in Python. Since Cucumber uses Ruby to do the actual work, the question is how to bridge to Python.

There are, of course, several options, like calling the Python code as a shell script from Ruby. Or maybe XML-RPC? To me, however, the most convenient way would be to make at least basic calls to Python methods from Ruby.

For this, RubyPython comes to the rescue. Building this bridge is not all that straightforward, though.
Read the rest of this entry »

Separating music playlists with Python

December 29th, 2011 by Marko Kuder

Do you spend a lot of time organizing and cleaning up your music? These Python scripts may help.
Read the rest of this entry »

Testing old web pages with new tools

December 5th, 2011 by Matej Artač

Recently I discovered a great tool for doing acceptance and unit tests of web interfaces, named Capybara. It works nicely as a back-end to Cucumber / Gherkin, but it can also, if need be, be used stand-alone. My goal was to validate an old page we’d created using ASP.NET. One thing, though, some folks might see as a drawback: it’s a Ruby tool.

Read the rest of this entry »
