Speaking about openATTIC at the Ceph Days in Munich (2016-09-23)


Ceph Days are full-day events from and for the Ceph community which take place around the globe. They usually provide a good variety of talks, including technical deep-dives, best practices and updates about recent developments.

The next Ceph Day will take place in Munich, Germany next week (Friday, 23rd of September). I'll be there to give an overview of and an update on openATTIC, particularly on the current Ceph management and monitoring feature set, as well as an outlook on ongoing and upcoming developments.

If you're using Ceph and would like to get updates on recent development "straight from the horse's mouth", next week is your chance! I look forward to being there.

Sneak Preview: Ceph Pool Performance Graphs

As I wrote in my call for feedback and testing of the Ceph management features in openATTIC 2.0.14, we still have a lot of tasks on our plate.

Currently, we're laying the groundwork for consuming SUSE's collection of Salt files for deploying, managing and automating Ceph. Dubbed the "DeepSea" project, this framework will form the foundation of how we plan to extend the Ceph management capabilities of openATTIC to deploy and orchestrate tasks on remote Ceph nodes.

In parallel, we are currently working on extending the openATTIC WebUI to make the existing backend functionality accessible and usable. Next up is displaying the performance statistics for Ceph pools that we already collect in the backend (OP-1405).

To whet your appetite, here's a screenshot of the ongoing development:


Keep in mind this is work in progress. What do you think?

Seeking your feedback on the Ceph monitoring and management functionality in openATTIC

With the release of openATTIC version 2.0.14 this week, we have reached an important milestone when it comes to the Ceph management and monitoring capabilities. It is now possible to monitor and view the health and overall performance of one or multiple Ceph clusters via the newly designed Ceph cluster dashboard.

In addition to that, openATTIC now offers many options to view, create or delete various Ceph objects like Pools, RBDs and OSDs.

We're well aware that we're not done yet. But even though we still have a lot of additional Ceph management features on our TODO list, we'd like to make sure that we're on the right track with what we have so far.

Therefore we are seeking feedback from early adopters and would like to encourage you to give openATTIC a try! If you are running a Ceph cluster in your environment, you could now start using openATTIC to monitor its status and perform basic administrative tasks.

All it requires is a Ceph admin key and config file. The installation of openATTIC for Ceph monitoring/management purposes is pretty lightweight, and you don't need any additional disks if you're not interested in the other storage management capabilities we provide.
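As a sketch, getting those two files onto the openATTIC host could look like this (the monitor hostname "ceph-mon1" and the default /etc/ceph paths are assumptions; adjust them to your cluster):

```shell
# Copy the cluster configuration and the client.admin keyring
# from a Ceph monitor node to the openATTIC host.
scp root@ceph-mon1:/etc/ceph/ceph.conf /etc/ceph/ceph.conf
scp root@ceph-mon1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
```

With these in place, openATTIC can talk to the cluster like any other Ceph client.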

We'd like to solicit your input on the following topics:

  • How do you like the existing functionality?
  • Did you find any bugs?
  • What can be improved?
  • What is missing?
  • What would be the next features we should look into?

Any feedback is welcome, either via our Google Group, IRC or our public Jira tracker. See the get involved page for details on how to get in touch with us.

Thanks in advance for your help and support!

openATTIC 2.0.14 beta has been released

Despite the summer holidays, the openATTIC development team has been busy, adding new functionality and improving existing features. This release also includes code contributions created by developers not employed by it-novum, and we're very grateful for the support!

Noteworthy new features include a first implementation of a Ceph Cluster monitoring dashboard that supports displaying health and performance data of multiple Ceph clusters.

The openATTIC WebUI now supports multiple dashboards with custom widget configurations (e.g. title, position and size). The dashboard configuration is saved in the user's profile and will be restored upon the next login.

See the screenshots below for a preview:

This release also adds extended Ceph pool management support: in addition to viewing existing pools, it's now possible to also create and delete Ceph pools via the WebUI, including support for both replicated and erasure-coded pools.
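For comparison, these WebUI operations correspond roughly to the following ceph CLI commands (pool names and the PG count of 128 are example values only):

```shell
# Create a replicated pool and an erasure-coded pool
ceph osd pool create rbd-pool 128 128 replicated
ceph osd pool create ec-pool 128 128 erasure

# Delete a pool (Ceph requires the name twice plus a confirmation flag)
ceph osd pool delete rbd-pool rbd-pool --yes-i-really-really-mean-it
```

The WebUI wraps these steps in forms, so you don't have to remember the flags or pick placement-group counts by hand.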

Read more…

KVM guest with acpid installed will not shutdown

Yesterday a colleague migrated a physical machine into a KVM VM. Afterwards we wanted to manage the VM with virt-manager.

acpid was installed, but nothing happened when we tried to shut down or reboot the VM via ACPI requests.

The problem was that the migrated VM still thought it was running on physical hardware instead of in a VM. Therefore I changed the entry in the ACPI event configuration.

  • Edit /etc/acpi/events/powerbtn to contain action=/sbin/poweroff
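After the change, the power-button event file could look like this (the path and the event pattern follow the Debian acpid defaults; other distributions may differ):

```shell
# /etc/acpi/events/powerbtn -- acpid event definition
# Match the power button ACPI event and power off directly,
# instead of invoking a hardware-specific handler script.
event=button[ /]power
action=/sbin/poweroff
```

Restart acpid (e.g. service acpid restart) for the change to take effect.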

As an alternative you could purge and reinstall the acpid package.

  • apt-get purge acpid
  • apt-get install acpid

Sometimes it's that easy :-)

Video and Slides of the openATTIC Overview Talk at FrOSCon 2016

About a week ago, the openATTIC team attended the annual Free and Open Source Conference (FrOSCon) in St. Augustin, Germany.

We had a booth in the exhibition area and we also gave an Overview talk about openATTIC 2.0 (in German), highlighting the latest changes and features as well as an outlook into future development plans.

The slides of this talk have now been uploaded to SlideShare, and a video recording of the presentation is available on YouTube and C3TV - enjoy!

Video (YouTube):



The State of Ceph Support in openATTIC (August 2016)

In May, I posted an update on the state of the Ceph support in openATTIC.

Since then, we released openATTIC 2.0.12 and 2.0.13 and are currently working on the next release, 2.0.14.

With each release, we have added more Ceph management and monitoring functionality or refined existing features.

In this post, I'd like to summarize these changes as well as give an update on what we're currently working on.

Read more…

One Year in the openATTIC Team: A Summary

A month ago, I concluded my first year in the openATTIC team. How time flies when you're having fun!

One of my goals early on was to make openATTIC more open and accessible for community contributors and early adopters. There were a few barriers that had to be removed for this to happen.

In this post, I'd like to recapitulate some noteworthy changes that took place in the openATTIC project during the last 12 months. I also would like to summarize some highlights and achievements.

I realized that I mention many of these when giving presentations or talking with people about openATTIC, but I think it also makes sense to put them in writing on this blog.

Read more…

openATTIC 2.0.13 beta has been released

We're happy to announce the availability of openATTIC version 2.0.13!

In this release, we have made further improvements to the Ceph RBD handling in our user interface. We cleaned up many Ceph-related detail-information tabs to display only useful data, especially on the Ceph RBD page. We've also made some usability improvements to our dashboard: unfinished wizard steps have been removed, and share paths are now set automatically at the end of the wizard instead of prompting the user for them. We also added a new dialog for creating RBDs and integrated the functionality to delete RBDs into the UI. See our Sneak preview of additional Ceph RBD management functions for details.

The Nagios monitoring has been improved by adding performance data for Ceph pools. We now also continuously track the responsiveness of the Ceph cluster.

In the Ceph-related backend, we made some performance improvements by only running the commands that are actually used by the REST API.

You may want to make the REST API accessible on another host. In 2.0.13, it is possible to configure the URL of the API globally, so you don't need to customize every service that calls the API.

For those of you who want to use openATTIC as a preconfigured VM, we now provide images for KVM and VirtualBox, which can be found at apt.openattic.org/vms/.

If you already run the check_cephcluster Nagios plugin, you might receive the following error in your PNP4Nagios log (/var/log/pnp4nagios/perfdata.log):

2016-07-22 14:15:14 [22939] [0] RRDs::update ERROR <path to the RRD file>: found extra data on update argument: 13.67

This is because of the new parameter exec_time. In that case you will have to remove all existing RRD and XML files of your Ceph clusters. This can be done by running:

rm /var/lib/pnp4nagios/perfdata/<host_name>/Check_CephCluster_*

Note: after removing these files, all performance data collected for your Ceph clusters so far will be gone!

Read more…