Wednesday, June 10, 2015

Fedora Activity Day: Release Engineering 2015

This past weekend a number of members of the Fedora Community who spend their time focused on Release Engineering came together for a Fedora Activity Day (FAD) to work on items of interest that will fix current issues, as well as to work toward solutions for future challenges such as how to deliver the Fedora.Next Rings concept in a clean, well-defined, standard, reproducible manner via build systems, tooling, and processes. One of the big items that I'm personally very excited about is Fedora Atomic (which is Fedora's implementation of Project Atomic).

A big ticket item for Fedora Release Engineering right now is Pungi version 4, which is a complete rewrite of Pungi version 3.x and will enable a large array of new compose functionality that would have been quite difficult to implement in the older version. Among those items are Koji integration, enabling nightly Rawhide composes to match a normal Test Candidate or Release build, and enabling all outlets of Fedora Atomic, including ISO installer and pxe-to-live nightly builds. Since this will enable more rapid iteration on the Fedora Atomic composes, and I'm very enthusiastic about that project, this is where I spent a large amount of my time over the weekend.

A quick side note: I was fortunate enough to attend this event *and* a session about Fedora Hubs the day before the Activity Day, which was amazing, and I can't wait for that project to come to fruition. I won't go too far into it, though, because this particular post is about the FAD.

Friday 2015-06-05
We kicked off the Release Engineering FAD by spending the first few hours going back through the proposed deliverables to bring everyone up to speed on all the issues, their background, and the motivations to resolve them, and to discuss and debate which items would provide the most "bang for the buck" while we were fortunate enough to all be sitting in a room together over the weekend. We took a scrum-style approach, led by Ian McLeod, to scoring work tasks by how long we thought the work would take overall. Once this was done, we broke out into subgroups to divide and conquer. I joined Jon Disnard in a breakout session to work on Pungi 4, which will enable a lot of other items in the deliverables list for the FAD. As mentioned before, Pungi version 4.x is a complete rewrite of Pungi 3.x, and beyond what was previously mentioned it has taken compose times from 8+ hours down to roughly an hour in most cases. The work on Pungi 4 was unfortunately quite a bit more than we had originally estimated: there were APIs it depended upon that had been changed out from under it, there were a number of namespace bugs, and there was a considerable amount of code we were able to remove once we realized that the data that code was parsing and producing was already held in productmd, a new dependency of Pungi in version 4. This work continued late into the evening and on into the next day.

Saturday 2015-06-06
All Fedora contributors met up and did a short status check-in on progress made and/or work completed the previous day so that we could sort out what to work on next and re-prioritize if necessary based on whether things had become blocked. Once this was complete, everyone broke out into groups again to continue getting work done. On this day I worked with a few more people than before, including Dan Mach (the original author of the recently rewritten Pungi version 4), which was a massive help in terms of institutional knowledge. The end result of this collaboration was a functional Pungi 4 for the workflows we were testing at the time (I leave this open-ended mostly because there are a *lot* of possible workflows in Pungi 4 and we've not been able to test them all yet). As a result, we filed a pull request with upstream Pungi and continued to work on other items. However, one thing that stuck out here was that iterating on Pungi was unnecessarily painful because you had to have access to specific sets of RPMs in order to perform compose workflows. This was very time consuming and did not bode well for rapid development; we started a discussion of how to improve this, but that bled into the evening. We also ran into an interesting bug where createrepo_c will fail but report success; this is still something we're attempting to track down, but we reverted to createrepo in the interest of time. Interestingly enough, we ran into a new bug there where the createrepo pathing is always prepended with a bind-mounted path, but it didn't stop us and it's on the list of items to be resolved.

Sunday 2015-06-07
Once again, we all met and assessed the current status of ongoing work. The good news was that a lot of work had been completed at this point; the bad news was that some items had ended up blocked and we had to resolve those blockers before we could move on, so some of the estimates we made on Friday were off. However, we documented all of this, so there was a solid plan of action to move forward with. At this point we broke out into groups again and I joined the group that started to discuss next-gen tooling. We diverged from writing code for a while so that we could be sure not to run out of time before discussing these next-generation tools, as per the agenda. Koji 2 was discussed at length, and some initial design proposals should be coming down the pipeline this week on the koji-devel mailing list (the core developers committed to getting that done soon). We started defining requirements for ComposeDB (project name pending, we don't feel this is really the best name); a mailing list thread with initial discussion results and a write-up of the end result will follow. A few of us also re-converged on the topic of slow iteration for Pungi 4 development, and the end result was the ability to rapidly run tests out of a git clone/checkout. With this pull request, developers can run tests in roughly 20 seconds (after the initial setup to create the "dummy data", which takes a minute or two), which is a massive improvement over the hour-long test runs we were having against the real Fedora RPM sets.

This has been mostly an account of things I was directly involved in; there was a lot of work that got done over the course of the FAD, and I couldn't even remotely pretend to keep track of it all. That being said, many folks did a wonderful job of keeping tabs on their specific areas of work as well as the results of the daily check-ins, and this was captured in an Etherpad here. Different members of the FAD volunteered to summarize and send out status updates, summaries, and plans of action for continued work to the appropriate outlets for the different areas they worked on. All in all I think it was a successful FAD, though I lack some perspective as this was the first FAD I've ever been to. Others who have participated in FADs before had positive things to say, so I feel good about it.

Next, I plan to follow up and make sure we get Fedora Atomic running in Pungi 4 as soon as we possibly can so that nightly builds allow the project to iterate more rapidly. From there I hope to work on both ComposeDB and Koji 2.0.

Here's to continuing to get work done in Fedora land!

Happy hacking,
-AdamM

Wednesday, January 29, 2014

Running your own docker registry on Fedora or RHEL/CentOS 6

This isn't going to be a very wordy post; it's just the process I used to set up a local docker registry for testing purposes. This can be done with either Fedora or RHEL/CentOS 6 with EPEL. I'm mostly writing this process down because I had to look up the info from more than one location and figured I should record it in one place so I remember next time I try to do it, and hopefully someone else might find it useful.

Before we start, if you're running RHEL or CentOS 6 you're going to need EPEL6 installed from here.

First, install the packages:

yum -y install docker-io docker-registry

Next we need to start up the services. (I know I'm not using the native systemd/systemctl commands here, but this way it works on both Fedora and RHEL/CentOS 6, so I went that route.)

service docker start
service docker-registry start
service redis start

You can chkconfig on or systemctl enable the services if you so choose, and they will start automatically on reboot.
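On RHEL/CentOS 6 that looks like:

chkconfig docker on
chkconfig docker-registry on
chkconfig redis on

and on Fedora the systemd equivalent is:

systemctl enable docker.service docker-registry.service redis.service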

Next up, just as an example, let's go ahead and pull a docker image. (Note: you either need to do this as root or as a user that's been added to the docker group.)
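If you'd rather not run everything as root, adding a user to that docker group is a one-liner (the username here is just a placeholder):

usermod -aG docker someuser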

# docker pull centos
Pulling repository centos
539c0211cd76: Download complete

# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
centos 6.4 539c0211cd76 10 months ago 300.6 MB
centos latest 539c0211cd76 10 months ago 300.6 MB


Now we can run a centos image as a container:

# docker run -t -i centos /bin/bash
bash-4.1#

You can disconnect from it but still leave it running with Ctrl-p followed by Ctrl-q, after which you will see it in the running docker list.
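To get back into it later, you can attach by container ID (you can find the ID via docker ps, as shown below):

# docker attach ab0e4ba814ab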

# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ab0e4ba814ab centos:6.4 /bin/bash 50 minutes ago Exit 0 angry_euclide

Next up we need to commit this container as an image tagged for our registry (this would potentially be an image you made changes to from the base image, or otherwise).

# docker commit ab0e4ba814ab localhost.localdomain:5000/centos_local
6c82b393337351db8c63b807efc6700934eecc364357a26a472a899f63d4fc09
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ab0e4ba814ab centos:6.4 /bin/bash About an hour ago Up 25 minutes angry_euclide


Once that is done we can push to our registry.

# docker push localhost.localdomain:5000/centos_local
The push refers to a repository [localhost.localdomain:5000/centos_local] (len: 1)
Sending image list
Pushing repository localhost.localdomain:5000/centos_local (1 tags)
539c0211cd76: Pushing [=================================================> ] 310.8 MB/310.9 MB 0
6c82b3933373: Pushing [=================================================> ] 288.1 MB/288.2 MB 0
Pushing tags for rev [6c82b3933373] on {http://localhost.localdomain:5000/v1/repositories/centos_local/tags/latest}


Alternatively we can build and tag from a Dockerfile.

(Because I can't figure out how to make blogger show a heredoc properly I'm just using an echo with a redirect ... it works so I'm moving on)

# echo 'FROM centos
MAINTAINER "Adam Miller"

RUN yum -y update
RUN yum -y install httpd
EXPOSE 80

CMD /usr/sbin/apachectl -D FOREGROUND' > Dockerfile


# docker build -t localhost:5000/centos_httpd .
Uploading context 51.2 kB
Step 1 : FROM centos

---> 539c0211cd76
Step 2 : MAINTAINER "Adam Miller"
---> Running in 7fde3245be29
---> 89f2c637957f
Step 3 : RUN yum -y update
---> Running in b6c6bd22fcb5
Loaded plugins: fastestmirror
Setting up Update Process
Resolving Dependencies
--> Running transaction check


************************************************************************

******** NOTE: Lots of yum output omitted here for brevity *************
************************************************************************

Transaction Summary
======================================================================
Install 6 Package(s)
Upgrade 57 Package(s)

Total download size: 50 M

Complete!
---> dc4fad6ccf28
Step 4 : RUN yum -y install httpd
---> Running in 2bc296aed371
************************************************************************

********** NOTE: Some yum output omitted here for brevity **************
************************************************************************

================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
httpd x86_64 2.2.15-29.el6.centos base 821 k
Installing for dependencies:
apr x86_64 1.3.9-5.el6_2 base 123 k
apr-util x86_64 1.3.9-3.el6_0.1 base 87 k
apr-util-ldap x86_64 1.3.9-3.el6_0.1 base 15 k
httpd-tools x86_64 2.2.15-29.el6.centos base 73 k
mailcap noarch 2.1.31-2.el6 base 27 k
redhat-logos noarch 60.0.14-12.el6.centos base 15 M

Transaction Summary
================================================================================
Install 7 Package(s)

Total download size: 16 M
Installed size: 19 M


Complete!
---> d1fcb707794d
Step 5 : EXPOSE 80
---> Running in 3e5baa8bf52f
---> 50f9343d8a7d
Step 6 : CMD /usr/sbin/apachectl -D FOREGROUND
---> Running in 916de09d72bb
---> 0d908165e418
Successfully built 0d908165e418

# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
localhost:5000/centos_httpd latest 0d908165e418 11 minutes ago 628.8 MB
localhost.localdomain:5000/centos_local latest 6c82b3933373 2 hours ago 594.5 MB
centos 6.4 539c0211cd76 10 months ago 300.6 MB
centos latest 539c0211cd76 10 months ago 300.6 MB

# docker tag 0d908165e418 localhost:5000/centos_httpd
# docker push localhost:5000/centos_httpd
The push refers to a repository [localhost:5000/centos_httpd] (len: 1)
Sending image list
Pushing repository localhost:5000/centos_httpd (1 tags)
539c0211cd76: Image already pushed, skipping
89f2c637957f: Pushing [=====> ] 1.024 kB/10.24 kB 5s
dc4fad6ccf28: Pushing [=================================================> ] 288.2 MB/288.3 MB 0
d1fcb707794d: Pushing [=================================================> ] 34.76 MB/34.96 MB 0
50f9343d8a7d: Pushing [=====> ] 1.024 kB/10.24 kB 1s
0d908165e418: Pushing [=====> ] 1.024 kB/10.24 kB 1s
Pushing tags for rev [0d908165e418] on {http://localhost:5000/v1/repositories/centos_httpd/tags/latest}


Now we can pull one of the images we've created from the registry.
# docker pull localhost:5000/centos_httpd
Pulling repository localhost:5000/centos_httpd
89f2c637957f: Download complete
dc4fad6ccf28: Download complete
d1fcb707794d: Download complete
0d908165e418: Download complete
50f9343d8a7d: Download complete
539c0211cd76: Download complete


That's about it. This of course is just set up for use on localhost, mostly as an example. The docker registry can be configured via /etc/docker-registry.yml.
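I won't dig into the config here, but to give a rough idea of the shape of that file, here's a hypothetical excerpt (the flavor name, keys, and paths may differ in your packaged default, so treat this as a sketch and check the shipped file):

# /etc/docker-registry.yml (hypothetical excerpt)
prod:
    storage: local
    storage_path: /var/lib/docker-registry
    loglevel: info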

For more information, here's a list of resources about docker registry and index:
http://blog.thoward37.me/articles/where-are-docker-images-stored/
http://docs.docker.io/en/latest/api/registry_index_spec/
http://blog.docker.io/2013/07/how-to-use-your-own-registry/
http://kencochrane.net/blog/2013/08/the-docker-guidebook/#part-6-using-a-private-registry
https://github.com/dotcloud/docker-registry

Tuesday, January 14, 2014

Book Review: Ansible Configuration Management

Ansible Configuration Management [0]



TL;DR - Buy the book, it's good.

    This book is a great resource for any Linux administrator looking for a really well-written, brisk-paced walkthrough of Ansible[1][2]. I don't want to call this book an introduction to Ansible because there is a lot of Ansible coverage packed into these 92 pages; at that length I suspect most would expect it to be light on the goods, but that is far from the truth. This book does a great job of packing plenty of information into a small package.

    Ansible Configuration Management starts off by describing what will be covered, the tools you need to make real use of the text, and who this book is for, as well as the standard items you would find in the preface of a book, such as typographic conventions and the like.

    Chapter 1 - Here the author kicks off with coverage of various installation methods, including distribution-specific package managers, pip, and installing from the Ansible source code. From there we go into setting things up and an introductory example just to get your feet wet. Here is where I think my favorite part of Chapter 1 happens: the author covers ansible-doc, which I feel is an extremely useful component of Ansible, and I'm glad the author brought it up so early in the book to highlight reference material before diving in too far.
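    For instance (standard ansible-doc usage, not an excerpt from the book):

ansible-doc -l      # list every module Ansible knows about
ansible-doc yum     # show the full documentation for the yum module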

    Chapter 2 - The author takes us through the paces of what is known as a Playbook in Ansible vocabulary, which is how you group sets of tasks together to be reusable. I really like the approach that is taken: each section of the Playbook is broken down with an explanation, along with discussion of what makes different aspects useful in actual use cases. Then we are taken through some of the basic staples in Ansible space in the form of modules. The modules covered here are what I would consider "task modifiers" for lack of a better term; these allow for modifying task behavior based on conditions we set on the task, or simply because we wanted to mix it up. Again, I feel the author does a good job tying the content back to real-world examples of problems many admins need to solve.
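    To give a rough flavor of what a Playbook looks like, here is a minimal sketch of my own (not an example from the book), using the key=value module syntax common in the Ansible 1.x era:

---
- hosts: webservers
  user: root
  tasks:
    - name: ensure apache is installed
      yum: name=httpd state=present
    - name: ensure apache is running and starts on boot
      service: name=httpd state=started enabled=yes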

    Chapter 3 - In this chapter we build on the material covered in Chapter 2 with more advanced playbook topics such as looping, conditional execution, task delegation, inventory variables, environment variables, external data lookups, storing results, debugging playbooks, and more. Once again, I'm going to sound like a broken record, but I like that the author doesn't just give an academic discussion of each topic; he ties each one to an actual use case or administration task to demonstrate how the specific feature can solve a problem for you, which is again beneficial. The discussion here is solid and was an enjoyable read.
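    As a small taste of those constructs, a loop combined with a conditional might look like this in Ansible 1.x syntax (my own sketch, not text from the book):

- name: install a list of packages, but only on Red Hat family systems
  yum: name={{ item }} state=present
  with_items:
    - httpd
    - mod_ssl
  when: ansible_os_family == "RedHat"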

    Chapter 4 - This is where the author wraps all the previous topics together in a chapter titled "Larger Projects." This is something I'm a big fan of, and one of the reasons I think that even with the short length of the book the author does a great job of breaking past the realm of introductory topics. Here we are shown how to handle large projects of Ansible playbooks to manage complex infrastructure. This chapter walks through Includes, Task Includes, Handler Includes, and Playbook Includes. Then it's on to one of my favorite features of Ansible: Roles. Our author takes us through what an Ansible Role is, including some interesting notes on parsing precedence, and how to make use of them. One thing I had mixed feelings on in this chapter is the coverage of "New Features in Ansible 1.3," as I worry this will show the book's age quickly with the release cadence that the Ansible project maintains. However, the coverage in that section, as well as the rest of the book's text, will I'm sure remain useful for some time to come, as Ansible continues to add features without breaking compatibility as newer versions roll out. Next our author discusses ways to increase the speed of Ansible runs using different techniques based on requirements and use cases, as well as covering Ansible's pull mode, which is something to take note of. Pull mode is often considered "backwards" in Ansible lore, as Ansible is primarily a "push mode" system, but some SysAdmins/Ops folk still prefer pull mode, so Ansible provides the functionality and our author takes some time to cover how to utilize it.
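    For reference, a Role is mostly a directory layout convention that Ansible knows how to parse; a typical role looks roughly like this (my own sketch, not from the book):

roles/
  webserver/
    tasks/main.yml      # the tasks the role performs
    handlers/main.yml   # handlers those tasks can notify
    templates/          # Jinja2 templates used by the role
    files/              # static files the role copies out
    vars/main.yml       # variables scoped to the role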

    Chapter 5 - Custom Modules. Here our author takes some time to discuss what an Ansible module is in terms of implementation, then shows how to write a simple module in the bash shell scripting language. Moving on, our author shows how to write an Ansible module in Python, which is what I would consider "native" to Ansible, as all modules that are to be accepted into Ansible core must be written in Python. There is good discussion here about the integration points of modules into the Ansible system, as well as how data is passed, how debugging information is handled, and much more.
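    To make that concrete, a non-Python module boils down to something like this sketch of mine (not code from the book): Ansible hands the module a file of key=value arguments as its first argument and expects a JSON result on stdout.

#!/bin/bash
# Minimal sketch of a bash Ansible module. Assumes simple key=value
# arguments with no spaces in the values.
source "$1"    # crude: load the key=value args file into shell variables
if [ -z "$msg" ]; then
    echo '{"failed": true, "msg": "missing required argument: msg"}'
    exit 1
fi
echo "{\"changed\": false, \"msg\": \"$msg\"}"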

All said and done, I would recommend this book to anyone who is interested in Ansible and would like a well-written guide to take them from zero to usefully configuring and deploying infrastructure services with Ansible, as well as writing custom modules.

Hope this helps someone out there.

Happy hacking.

-AdamM

[0] - http://www.packtpub.com/ansible-configuration-management/book
[1] - https://github.com/ansible/ansible
[2] - http://www.ansibleworks.com/

Disclaimer:
    I was approached by Packt Publishing to review this book and was given a free copy in exchange for doing the review. I did, however, really enjoy the book, and as a side effect I purchased a copy to support the author for their work.

Thursday, December 12, 2013

So, I wrote a book.

I wrote a book about OpenShift. I talked about it a little on Twitter but didn't want to oversell it because I feel a little strange about too much self-promotion. However, my publisher would really like to get some third-party feedback on it and requested that I post this, so if you like reading tech books and writing your thoughts about them, my publisher and I would appreciate the feedback.
Thanks!


Packt Publishing is offering free copies of Implementing OpenShift (http://bit.ly/HUa7Be) in exchange for a review either on your blog or on the title's Amazon page.
Here’s the blurb:
  • Learn more about the cloud, its different service models, and what each one means to its target audience
  • Master the use of OpenShift Online through the command line, web interface, and IDE integrations
  • Understand the OpenShift architecture, breaking into how the open source Platform-as-a-Service works internally
  • Deploy an OpenShift Origin-based Platform-as-a-Service in your own environment using DevOps automation tools
If you're a software developer or DevOps practitioner interested in learning how to use the OpenShift Platform-as-a-Service for developing and deploying applications, and doing much more with it, this is a good way to bag yourself a free guide (current retail price $12.74).
A limited number of free review copies are available until 20th December 2013.
If you’re interested, email Harleen Bagga at: harleenb@packtpub.com.

Tuesday, May 22, 2012

Announcing OpenShift Origin Nightly Fedora 16 RPMs

Hello all,

    I am pleased to announce the immediate availability of nightly builds of the OpenShift Origin package set in RPM form, built for Fedora 16, available to anyone who might be interested here.

    This repository should be utilized along with the instructions outlined in the "Build your own PaaS" article. The nightly repository will not take the place of any of those outlined in the guide, but will instead supplement what is there and offer a new development snapshot of all the packages whose source code originates in the OpenShift Origin github repositories.
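    For those unfamiliar, wiring a supplemental repository into yum just takes a small file in /etc/yum.repos.d/, along these lines (the repo name and baseurl below are placeholders; use the actual URL from the link above):

[openshift-origin-nightly]
name=OpenShift Origin nightly builds (Fedora 16)
baseurl=http://example.com/openshift-origin/nightly/fedora-16/
enabled=1
gpgcheck=0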

    A little back story on this: the OpenShift Team at Red Hat has been looking for more ways to provide the community with opportunities to consume the ongoing development snapshots of the upstream code hosted on github. We initially launched OpenShift Origin's components, Crankcase and the OS-Client tools, on github, which is something I've been extremely excited about, and along with that launch came a considerable amount of documentation for getting started and involved with OpenShift Origin, which I would highly encourage everyone to take some time to check out; it's certainly exciting stuff!

    Now, with the advent of the nightly RPM builds, my hope is that this will be the first of many steps toward the goal of delivering the code in a more consumable manner, so that end users who want to get their hands dirty early on are able to do so without having to build everything from scratch. Another hope is that the community members who consume the nightly builds will find the project as exciting as those of us already involved do, and will be motivated to join and contribute! I'm also hoping we can get to a point where we offer nightly builds of the OpenShift Origin LiveCD, which Krishna Raman has been doing great work on, but that is currently just an idea I've been pondering and I don't want to make any promises I am unable to deliver on. We'd like to make sure everyone knows we're continuing to brainstorm ideas, take suggestions and contributions, and make strides to continue our open source commitment to the community.

I suppose that's all for today, thanks to everyone who's been using OpenShift and becoming members of the OpenShift Origin community!

Happy hacking,
-AdamM

Wednesday, March 28, 2012

NetworkManager is in @core but don't fret ....

Recently there was a not-so-announced change to Fedora's @core: NetworkManager is now a part of it. This will initially cause traditional *nix admins to have a moment of "WTF?", but bear with me; I also had this reaction, but I've slept on it and it clearly has yet to kill me. Let's address a couple of things in a lighthearted nod to the angry mob that is the internet:

WHY?!?!!@#^@!$#!@#$!%^ZOMG!#@!!11!1!$eleventyone!!?@!?@

    Well, the motivation was simple: there was a bug, it was nasty, and decisions needed making. One was made, and at first you might not be on board with it, or you might never be, but at face value it's not really that bad, since NetworkManager has made large strides in its capability and I think a large majority of previous concerns are no longer valid. The only outstanding issue that I've seen brought up is the need for multiple static route tables, and if your configuration is that advanced I imagine a "systemctl disable NetworkManager.service && chkconfig network on" is not going to cause you much pain and despair..... on to more rage!!! :)

NOW WHAT??? CONFIG, Y U NO WORK!!?!?!

   The first complaint I've heard so far is that the configuration methods from the days of lore, utilizing the classic "network" utility powered by our favorite set of scripts, will be no more. Alas! The wonderful and whimsical NetworkManager developers have kept this interest in mind, and there is /etc/NetworkManager/NetworkManager.conf (which is extremely well documented in 'man 5 NetworkManager.conf'), in which we see the line "plugins=ifcfg-rh" by default. What does this little gem mean? Well, I'll quote the man page, because I'm lazy and don't like rewriting what's already really well written:


ifcfg-rh
    plugin is used on the Fedora and Red Hat Enterprise Linux
    distributions to read and write configuration from the standard
    /etc/sysconfig/network-scripts/ifcfg-* files. It currently supports
    reading wired, WiFi, and 802.1x connections, but does not yet
    support reading or writing mobile broadband, PPPoE, or VPN
    connections. To allow reading and writing of these add keyfile
    plugin to your configuration as well.


CONFIGURE ALL THE THINGS!!!1!#%$%^@#!#!

    I'm glad you asked. There is a nice little document located in /usr/share/doc/initscripts-*/ entitled sysconfig.txt that has been a life saver for me, and if you weren't aware of this document I suggest you get familiar with it because it is full of extremely valuable information, but I digress. If you open this file and navigate to the section tagged 'NM_CONTROLLED' you will see the following:


NM_CONTROLLED=yes|no
      If set to 'no', NetworkManager will ignore this connection/device.
      Defaults to 'yes'.

What does this mean? Well ... if you did a fresh install, chances are your interfaces already have this, but if you've got a machine that's been upgraded through the timeline to this point, just add that little line and fire up NetworkManager and it will apply your settings for you. Magic! I know, pretty cool huh? :)
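For example, a simple DHCP-managed interface in /etc/sysconfig/network-scripts/ifcfg-eth0 (a generic sketch of mine; adjust the device name to your hardware) would look like:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=yes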

_____________________________ <--- the line where joking stops

In all seriousness, this is a change that is bound to ruffle some feathers, but the reality is that with the pace at which Fedora moves it is sometimes best to take the path of least resistance in order to solve a problem, and I think that's what's been done here. If there are any real concerns (I mean really valid concerns, and not just "I hate change"), then I like to believe that the greater community would be willing to entertain them in the proper channels of communication, and if necessary the decisions made to fix issues can be altered/modified as needed to suit the needs of the project. I think we as faithful members of the community like to knee-jerk react a little too much, and I'm 100% guilty of this myself, but if we spend some time actually processing the scenarios and possible outcomes, and attempt to include as many factors as possible, we can see that inclusions of things like NetworkManager into @core aren't there just to add some packages for kicks; there's an actual valid reason. Fedora doesn't change for the sake of change; it changes to make a better project and platform for all to enjoy. In this case it's about squashing bugs so our experience is more pleasant; other times it's about innovation.... in the end, hooray for another closed bug! :)

Happy hacking,
-AdamM

Wednesday, March 21, 2012

Dreams do come true ....

Dreams do come true .... as of April 2, 2012 - I WORK FOR RED HAT!!!!!