Speaking about openATTIC at FOSDEM 2017


On February 4th and 5th, the annual FOSDEM conference will take place in Brussels, Belgium.

This year, I'll give a talk titled "Ceph and Storage management with openATTIC", in which I'd like to give an overview of and an update on our project.

The talk will be a session in the Software Defined Storage developer room, which is scheduled to take place on Sunday, February 5th. My slot starts at 13:30.

FOSDEM is a very popular and intensive conference - I look forward to attending it again!

openATTIC 2.0.17 beta has been released

Shortly before the holidays, we're happy to announce the availability of openATTIC version 2.0.17!

Due to the onboarding of the openATTIC team to SUSE, this release took a bit longer than the usual cycle. But we hope it was worth waiting for!

As usual, we included a good mix of bug fixes, improvements and some new functionality. Some highlights in this version include:

  • Lots of improvements for installing and running openATTIC on Ubuntu Linux 16.04 aka "Xenial Xerus". openATTIC on Xenial now passes all tests and the installation should be fairly straightforward. We're still interested in your feedback, though - please let us know if you run into any issues installing or running openATTIC on this platform. An installation how-to is available in the last release announcement; an updated version of the installation documentation will follow.
  • In the Ceph backend, we moved calls to librados into separate processes, to prevent stuck RADOS calls from blocking the Django application and web UI.
  • We've replaced the previous systemd DBUS calls with calls to systemctl and now use systemd for starting/stopping/reloading services on all platforms where systemd is available (previously, openATTIC was still using the "old" SysV init tools, e.g. service). We also switched to using reload-or-restart for reloading services by default.
  • The Web UI received a number of refinements and improvements, e.g. some API-Recorder fixes and usability enhancements. Now, it's also possible to obtain a user's authentication token via the web interface, which helps to avoid using passwords in external scripts or applications that want to access the openATTIC REST API.
  • During the initial installation on RPM-based systems, the openattic PostgreSQL database and user account are now created using a random password via oaconfig install.
  • Improvements for setting up development environments using Vagrant.
  • We also added a new chapter to the Documentation that describes how to set up a multi-node configuration.
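As a sketch of how such an authentication token can be used from a script: the endpoint path, host, and credentials below are placeholders based on the Django REST Framework conventions openATTIC builds on, so check your installation's API documentation for the exact URLs.

```shell
# Obtain a token once (hypothetical endpoint and credentials, for illustration):
TOKEN=$(curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"username": "openattic", "password": "secret"}' \
  "http://oa-host/openattic/api/api-token-auth" |
  python -c 'import json, sys; print(json.load(sys.stdin)["token"])')

# Later requests authenticate with the token instead of a password:
curl -s -H "Authorization: Token $TOKEN" "http://oa-host/openattic/api/hosts"
```

This way, external scripts never need to store the account password itself, and the token can be revoked independently.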

Read more…

openATTIC Wins a Silver OSBAR Award 2016


Since 2014, OSBAR, the innovation award of the German Open Source Business Alliance (OSB Alliance), has highlighted open source projects that add real benefit to the IT world.

Submissions are assessed based on originality, innovation, practical relevance and maturity by a committee of six well-known German IT and open source experts.

In total, 20 open source projects applied for this year's awards and one of them was our open source Ceph and storage management framework.

Read more…

Status of openATTIC on Ubuntu Xenial

We have been working on porting openATTIC to Ubuntu 16.04 LTS "Xenial Xerus" for quite some time now, and we wanted to give you a quick update on the current status as of openATTIC version 2.0.16.

It turns out that Xenial provides a number of challenges and differences that we needed to take into account, for example a new version of Django and the Django REST framework, as well as some additional underlying changes.

Making all the required changes in a backwards-compatible manner and testing them is quite time-intensive.

In a nutshell, we're not quite there yet, but we're making progress.

Some of these issues can be worked around, but the overall "out of the box experience" still needs to be further improved.

Read more…

We're hiring: Senior Frontend Developer

Now that we've joined SUSE, we're able to extend the team working on openATTIC!

We've just opened a new position and are hiring a "Senior Frontend Developer Enterprise Storage Management".

In this role, you'll be working with the openATTIC team on adding new features to openATTIC's web-based management frontend, as well as improving and extending existing functionality.

The openATTIC web interface is based on well-known web development technologies like AngularJS and Bootstrap and communicates with the openATTIC backend via its REST API.

See the job opening for further details on our expectations and requirements. If you have any questions or would like to learn more, don't hesitate to get in touch with us!

By the way, if you're interested in working on open source software, SUSE currently has 70+ job offerings available!

openATTIC 2.0.16 beta has been released

We're happy to announce the availability of openATTIC version 2.0.16!

Following our mantra "release early, release often", we have published a release four weeks after the release of 2.0.15. One of the highlights in this version is the migration support for openATTIC instances still running on Django 1.6!

Moreover, the openATTIC REST API now reports all installed packages as well as the currently installed openATTIC version. Most other changes are bug fixes and improvements of openATTIC. We have also continued working on supporting Ubuntu 16.04 LTS "Xenial Xerus".

Read more…

openATTIC joins SUSE


You may have seen today's announcement that the openATTIC development team has joined SUSE, and that SUSE has taken over the corporate sponsor role from openATTIC's parent company, it-novum.

I'd like to share my view about what this means for openATTIC and the community and ecosystem around the project.

First off, the license of the software or openness of the development process won't change. Quite the contrary: SUSE is fully committed to keeping openATTIC licensed under the GPL and growing the community around the project.

You will still be able to freely use it without arbitrary restrictions for your Ceph and "traditional" storage management needs.

Read more…

Automatically deploying Ceph using Salt Open and DeepSea

One key part of implementing Ceph management capabilities within openATTIC revolves around the ability to install, deploy and manage Ceph cluster nodes in an automated fashion. This requires remote node management capabilities that openATTIC currently does not provide out of the box. For "traditional" storage configurations, openATTIC needs to be installed on every storage node it manages, but you can use a single web interface for managing all of the nodes' storage resources.

Naturally, installing openATTIC on all nodes belonging to a Ceph cluster is not feasible.

As I mentioned in my post Sneak Preview: Ceph Pool Performance Graphs, SUSE is developing a collection of Salt files for deploying, managing and automating Ceph that openATTIC will build on.

The DeepSea Documentation on github is a good start, but sometimes it's helpful to get a simple step-by-step guide on how to get started.

Thankfully, SUSE's Tim Serong has written up a nice article that guides you through the various steps and stages involved in installing Ceph with DeepSea: Hello Salty Goodness.
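To give a rough flavor of what such a deployment looks like, DeepSea drives the installation through numbered orchestration stages that are run from the Salt master. The stage semantics sketched below follow the DeepSea documentation at the time of writing and may change between versions; see Tim's article and the project docs for the full prerequisites.

```shell
# On the Salt master, after installing DeepSea and accepting the minion keys:
salt-run state.orch ceph.stage.0   # provisioning: prepare and update the nodes
salt-run state.orch ceph.stage.1   # discovery: collect hardware profiles
# Assign roles to nodes by editing policy.cfg under /srv/pillar/ceph/proposals,
# then continue:
salt-run state.orch ceph.stage.2   # configure: generate the pillar data
salt-run state.orch ceph.stage.3   # deploy: set up MONs and OSDs
salt-run state.orch ceph.stage.4   # services: MDS, RGW and other gateways
```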

Hope you enjoy it!

Reduce KVM disk size with dd and sparsify

You can convert a raw or qcow2 non-sparse image to a sparse image with dd and virt-sparsify, or shrink an existing image that has grown over time.

Install the libguestfs-tools package on your system:

apt-get install libguestfs-tools

Now copy your existing image to a new one with dd:

dd if=existing_imagefile.raw of=new_imagefile.raw conv=sparse

Afterwards, use virt-sparsify to shrink the image further (in this example, the image is sparsified and converted to qcow2 in a single step):

virt-sparsify new_imagefile.raw --convert qcow2 new_imagefile.qcow2

In my case, the sparse dd copy turned a 65 GB block device into a 40 GB raw image, and virt-sparsify then reduced it down to 6.8 GB.
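If you want to see the effect of dd's conv=sparse without touching a real VM image, you can reproduce it with a scratch file full of zeros: the apparent size stays the same, while the actual disk usage of the sparse copy drops (assuming the filesystem supports sparse files).

```shell
cd "$(mktemp -d)"

# Create a 64 MiB file consisting entirely of zero bytes.
dd if=/dev/zero of=nonsparse.raw bs=1M count=64 status=none

# Copy it sparsely: runs of zeros become holes instead of allocated blocks.
dd if=nonsparse.raw of=sparse.raw conv=sparse bs=1M status=none

# Apparent sizes are identical ...
du -h --apparent-size nonsparse.raw sparse.raw
# ... but the sparse copy occupies far fewer blocks on disk.
du -h nonsparse.raw sparse.raw
```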

Developing with Ceph using Docker

As you're probably aware, we're putting a lot of effort into improving the Ceph management and monitoring capabilities of openATTIC in collaboration with SUSE.

One of the challenges here is that Ceph is a distributed system, usually running on a number of independent nodes/hosts. This can be somewhat of a challenge for a developer who just wants to "talk" to a Ceph cluster without actually having to fully set up and manage it.

Of course, you could be using tools like SUSE's Salt-based DeepSea project or ceph-ansible, which automate the deployment and configuration of an entire Ceph cluster to a high degree. But that still requires setting up multiple (virtual) machines, which could be a daunting or at least resource-intensive task for a developer.

While we do have a number of internal Ceph clusters in our data center that we can use for testing and development purposes, sometimes it's sufficient to have something that behaves like a Ceph cluster from an API perspective, but does not necessarily have to perform like a full-blown distributed system (and can be set up locally).

Fortunately, Docker comes to the rescue here - the nice folks at Ceph kindly provide a special Docker image labeled ceph/demo, which can be described as a "Ceph cluster in a box".
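Starting it looks roughly like this - the environment variables and flags follow the image's documentation at the time of writing and may differ between image versions, and the addresses are placeholders you need to adapt to your host:

```shell
# Run the all-in-one demo cluster (MON, OSD, MDS, RGW in a single container).
# MON_IP must be an address of the host; CEPH_PUBLIC_NETWORK its network.
docker run -d --net=host \
  -v /etc/ceph:/etc/ceph \
  -e MON_IP=192.168.0.10 \
  -e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
  ceph/demo

# Afterwards, the regular client tools on the host can talk to it:
ceph -s
```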

Read more…