April 13, 2016
The CentOS Project is heading to , but we need your help! Since this event is kind of a big deal, we need to make sure our booth is appropriately dressed for the occasion. The theme for the community space around the summit draws its inspiration from some popular , and we'd like to showcase our community's creativity by having your designs at our booth, and possibly by handing them out as giveaways during the conference. We at the Project are better builders and sysadmins than designers, so we'd love to see your suggestions and ideas.
If you have a design you'd like to see us use for the poster and booth work, you can submit it to the CentOS , or post it as a pull request to our . The designs need to be submitted by midnight (EST) on Monday, April 25th so that we have time to go through the submissions. Let's see what sort of creativity is lurking out there!
April 07, 2016
An updated version of CentOS Atomic Host (version 7.) is now available for , featuring significant updates to docker (1.9.1) and to the . Version 1.9 of the atomic run tool now includes support for , for downloading and deploying from all containers running on a host.
CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.
CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at . The backing ostree repo is published to .
CentOS Atomic Host includes these core component versions:
docker-1.9.1-25.el7.centos.x86_64
kubernetes-1.2.0-0.9.alpha1.gitb57e8bd.el7.x86_64
kernel-3.10.0-327.13.1.el7.x86_64
atomic-1.9-4.gitff44c6a.el7.x86_64
flannel-0.5.3-9.el7.x86_64
ostree-.atomic.el7.x86_64
etcd-2.2.5-1.el7.x86_64
cloud-init-0.7.5-10.el7.centos.1.x86_64
If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:
$ sudo atomic host upgrade
(414 MB) and (426 MB) are Vagrant boxes for Libvirt and Virtualbox providers.
The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see /centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:
$ vagrant init centos/atomic-host && vagrant up --provider virtualbox
(731 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.
(1 GB) image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.
Amazon Machine Images
ami-68710d08
ami-bb9616c8
eu-central-1
ami-f03fde9f
ap-southeast-1
ami-2da6734e
ap-northeast-1
ami-100f1f7e
ap-southeast-2
ami-b284a7d1
ap-northeast-2
ami-7a1dd414
ami-0668e76a
10efd11e9ab6c8b1228dda33bc6ed5 CentOS-Atomic-Host-7.1603-GenericCloud.qcow2
00a3c556eeaacc767a26a43e41d39a00a2 CentOS-Atomic-Host-7.1603-GenericCloud.qcow2.gz
1eafcc8cfe9b83de2c2ce4af CentOS-Atomic-Host-7.1603-GenericCloud.qcow2.xz
9f3b1b7a1f87c577cc350c36fb64b1f26dcc60bf21eb CentOS-Atomic-Host-7.1603-Installer.iso
f227bcb447f3de1800faf08eba CentOS-Atomic-Host-7.1603-Vagrant-Libvirt.box
bc451f55a53e1df83b22ffd867c35eaba2dfc6bfd8aecc748472 CentOS-Atomic-Host-7.1603-Vagrant-Virtualbox.box
Release Cycle
The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.
Getting Involved
CentOS Atomic Host is produced by the , based on upstream work from . If you’d like to work on testing images, help with packaging, documentation — join us!
The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.
Getting Help
If you run into any problems with the images or components, feel free to ask on the mailing list.
Have questions about using Atomic? See the mailing list or find us in the #atomic channel on Freenode.
February 26, 2016
An updated version of CentOS Atomic Host (version 7.) is now available for download. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.
CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at . The backing ostree repo is published to .
CentOS Atomic Host includes these core component versions:
kernel-3.10.0-327.10.1.el7.x86_64
cloud-init-0.7.5-10.el7.centos.1.x86_64
atomic-1.6-6.gitca1e384.el7.x86_64
kubernetes-1.2.0-0.6.alpha1.git8632732.el7.x86_64
etcd-2.2.2-5.el7.x86_64
ostree-.atomic.el7.x86_64
docker-1.8.2-10.el7.centos.x86_64
flannel-0.5.3-9.el7.x86_64
If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:
$ sudo atomic host upgrade
(421 MB) and (435 MB) are Vagrant boxes for Libvirt and Virtualbox providers.
The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see /centos/boxes/atomic-host). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:
$ vagrant init centos/atomic-host && vagrant up --provider virtualbox
(742 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This image allows users to control the install using kickstarts and to define custom storage, networking and user accounts. This is the recommended option for getting CentOS Atomic Host onto bare metal machines, or for generating your own image sets for custom environments.
(1 GB) image is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image.
Amazon Machine Images
ami-059d1f69
ap-northeast-1
ami-c74644a9
ap-southeast-2
ami-bae8ced9
ami-3fb05d5f
ap-southeast-1
ami-6c4c850f
eu-central-1
ami-ce8663a1
ami-451ea236
ami-fd62129d
ami-e6d5e88c
ap-northeast-2
ami-5732fc39
d4e4272e589dfb8d979cdcdbdaee8b7abc9a09ff30d2 CentOS-Atomic-Host-7.1602-GenericCloud.qcow2
33bd4f732cbc6db29ae2a4d7d0b768fa5c5ab1c999e CentOS-Atomic-Host-7.1602-GenericCloud.qcow2.gz
ee9d9b4d7b0b87c8ad626e64ffedfd3f415a84cded CentOS-Atomic-Host-7.1602-GenericCloud.qcow2.xz
39a548f0d64dbfadd1bc56ca938b7dba38b73c2ea87 CentOS-Atomic-Host-7.1602-Installer.iso
2f965b2a502cdee5ee4ee1494ded58b418a309df060 CentOS-Atomic-Host-7.1602-Vagrant-Libvirt.box
bc976d197cac629fd68a6d8faf6bcfaeca8afdefae1581b CentOS-Atomic-Host-7.1602-Vagrant-Virtualbox.box
Release Cycle
The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.
Getting Involved
CentOS Atomic Host is produced by the , based on upstream work from . If you’d like to work on testing images, help with packaging, documentation — join us!
The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.
Getting Help
If you run into any problems with the images or components, feel free to ask on the mailing list.
Have questions about using Atomic? See the mailing list or find us in the #atomic channel on Freenode.
February 12, 2016
An updated version of CentOS Atomic Host (version 7.) is now available. CentOS Atomic Host is a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host 7.2.
CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at . The backing ostree repo is published to .
CentOS Atomic Host includes these core component versions:
kernel-3.10.0-327.4.5.el7.x86_64
cloud-init-0.7.5-10.el7.centos.1.x86_64
atomic-1.6-6.gitca1e384.el7.x86_64
kubernetes-1.0.3-0.2.gitb9a88a7.el7.x86_64
etcd-2.1.1-2.el7.x86_64
ostree-.atomic.el7.x86_64
docker-1.8.2-10.el7.centos.x86_64
flannel-0.5.3-8.el7.x86_64
If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:
$ sudo atomic host upgrade
(416 MB) and (428 MB) are Vagrant boxes for Libvirt and Virtualbox providers.
The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:
$ vagrant init centos/atomic-host && vagrant up --provider virtualbox
(737 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.
(1 GB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available compressed in gz (418 MB) and xz formats.
Amazon Machine Images
ami-238d0e4f
ap-northeast-1
ami-b54d4adb
ap-southeast-2
ami-1f94747f
ap-southeast-1
eu-central-1
ami-40d6cd2c
ami-430aba30
ami-dae791ba
ap-northeast-2
ami-961fd1f8
eed698ac8ec03b32a55dddc8d54aca7d4 CentOS-Atomic-Host-7.-GenericCloud.qcow2
a7dde95e7d9a80c2eedec8ca42d3eda39 CentOS-Atomic-Host-7.-GenericCloud.qcow2.gz
9eca81dfc734dafe5a66b3adcdc CentOS-Atomic-Host-7.-GenericCloud.qcow2.xz
be3c1af37bd6b6c2fccca3a285ea40acabeaaee135 CentOS-Atomic-Host-7.-Installer.iso
e15ae21cdc0bd3fa88f8db2f6fdca0ece28c2bffdbb34f CentOS-Atomic-Host-7.-Vagrant-Libvirt.box
cedc51f070da8a7eac0260cb5a CentOS-Atomic-Host-7.-Vagrant-Virtualbox.box
Release Cycle
The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.
Getting Involved
CentOS Atomic Host is produced by the , based on upstream work from . If you’d like to work on testing images, help with packaging, documentation — join us!
The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.
Getting Help
If you run into any problems with the images or components, feel free to ask on the mailing list.
Have questions about using Atomic? See the mailing list or find us in the #atomic channel on Freenode.
February 09, 2016
The CentOS Project's Facebook group at just went over 20,000 users (it's at 20,038 at the moment). Great milestone, and many thanks for all the support. A large chunk of the credit goes to Ljubomir Ljubojevic ( ) for his time and for curating the admin team on the Facebook group.
And a shout out to Bert Debruijn, Wayne Gray, Stephen Maggs, Eric Yeoh and the rest of the 20k users. Well done guys, next step is the 40k mark, but more importantly – keep up the great help and community support you guys provide each other.
February 08, 2016
Most of the time tuned-adm can set fairly good power states, but I've noticed that when I want powersave as the active profile, to try and maximize the battery life, it will still run with the CPU clocked fairly high.
In some cases, e.g. when on a plane and spending all the time in a text editor, that's not convenient (either due to other apps running, or when you really want to get that 7 hr battery life).
On CentOS Linux 7, you can use a bit of a hammer solution in the form of /bin/cpupower, installed as part of the kernel-tools rpm. This will let you force a specific cpu frequency range with the frequency-set command, using the -d (min speed) and -u (max speed) options, or just set a fixed rate with -f. As an example, here is what I do when getting on the plane:
/bin/cpupower frequency-set -u 800MHz
Stuff does get lethargic on the machine, but at 800MHz and with all the external devices / interfaces / network bits turned off, I can still squeeze about 5 hrs of battery life from my X1 Carbon gen2, which has:
energy-full-design:
Of course, you should still set "tuned-adm profile powersave" to get the other power save options, and watch powertop with your typical workload to get an idea of where there might be other tuning wins. And if anyone has thoughts on what to do when that battery capacity hits 50-60%... it does not look like the battery on this Lenovo X1 is replaceable (or even sold!).
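For reference, here is the whole routine as a small shell sketch; the 800MHz cap is from above, while the frequency-info check and the 3.4GHz restore value are examples to adapt to your own CPU:
# assumes kernel-tools is installed; run as root or via sudo
tuned-adm profile powersave          # apply the powersave tuned profile
cpupower frequency-info              # check which frequency range your CPU supports
cpupower frequency-set -u 800MHz     # cap the max frequency while on battery
# later, back on mains power, raise the ceiling again (value is an example):
cpupower frequency-set -u 3.4GHz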
January 26, 2016
As a follow-up to last year's literally-a-discussion-in-the-hallway about with a few dozen folks at 2015, we're doing a round table discussion with some of the same people and similar topics this Sunday at FOSDEM, in the distro devroom. As a treat, Smooge is not only a long-time Fedora and CentOS contributor, he is one of us who started EPEL a decade ago.
If you are an EPEL user (for whatever operating system), a packager, an upstream project member who wants to see your software in EPEL, a hardware enthusiast wanting to see builds for your favorite architecture, etc. … you are welcome to join us. We’ll have plenty of time for questions and issues from the audience.
The trick is that EPEL is useful or crucial for a number of the projects now releasing on top of CentOS via the special interest group process (which lets them provide their community newer software on the slow-and-steady CentOS Linux). This means EPEL is essential for work happening inside of the CentOS Project, but it remains a third-party repository. Figuring out the details of working together across the Fedora and CentOS projects is important for both communities.
Hope to see you there!
We have been building out a CentOS Community CI infra that is open to anyone working on infra code or areas related to CentOS Linux, and have now onboarded a few projects. You can see the web ui (jenkins!) at .
Dusty has also put together a basic getting-started guide that goes into some of the specifics of how and why the CentOS CI infra works the way it does; check it out at .
If you use the CentOS atomic host downstream build scripts at , you will want to note a major change in the downstream branch. The older build_ostree_components.sh script has now been replaced with 3 scripts: build_stage1.sh, build_stage2.sh and build_sign.sh. Running build_stage1.sh followed by build_stage2.sh will give you exactly the same output as the old script used to.
The third script, build_sign.sh, now makes it easier to sign the ostree repo before any of the images are built. In order to use this, generate or import your gpg secret key, drop the resulting .gpg file into /usr/share/ostree/trusted.gpg.d/, edit the keyid at the end of the build_sign.sh script, and run the script after your build_stage1.sh run is complete (and before you run build_stage2.sh). You will see a pinentry window pop up; enter the password and check for a 0 exit code. Note that the gpg signature is a detached signature over the ostree commit.
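As a rough illustration of that flow (the key file name below is a placeholder, and the keyid edit still happens inside build_sign.sh as described above):
gpg --import my-signing-key.asc                          # import (or generate) your signing key
cp my-signing-key.gpg /usr/share/ostree/trusted.gpg.d/   # public key file, so ostree trusts the signature
./build_stage1.sh                                        # build the ostree content
./build_sign.sh                                          # expect a pinentry prompt; check for a 0 exit
./build_stage2.sh                                        # build the images from the signed repo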
January 17, 2016
With the latest release of CentOS-7, we have added several new (AltArch) releases in addition to our standard x86_64 (x86 {Intel/AMD} 64-bit) architecture.
Architectures (aka arches) in Linux distributions refer to the type of CPU on which the distribution runs.
In the case of our standard release, it runs on x86 64-bit CPUs like Intel Pentium 64-bit and AMD 64-bit processors.
A few months ago, in the CentOS 7 (1503) release, we added the (i686) as well as the (aarch64) architectures to CentOS-7.
These two arches have been updated to our latest CentOS-7 release (1511).
We have additionally added 3 new architectures to our latest release: (ppc64) and (ppc64le).
Here is the .
These new architectures provide a long-lived, community-based platform, based on our x86_64 releases, for many new machine types.
The CentOS team is very excited to be able to provide our code base for these architectures and we need help from the community to make them all better.
We are hosting a CentOS Dojo in Brussels, Belgium on the 29th Jan 2016. Lots of the key people working on the AltArch builds will be present there, and it would be a great forum to engage with these groups. You can get the details for the event , including the registration links. (Note: Registrations are currently closed, but we are trying to find more space, so they could open before the event.)
We will also have a booth at , as well as talks in the Distributions devroom. See you there.
December 16, 2015
With the release of 7.2, we’ve seen a rise in bugs filed for container build failures in docker. Not to worry, we have an explanation for what’s going on, and the solution for how to fix it.
The Problem:
You fire off a docker build, and instead of a shiny new container, you end up with an error message similar to:
Transaction check error:
file /usr/lib64/libsystemd-daemon.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-id128.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-journal.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libsystemd-login.so.0 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/libudev.so.1 from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
file /usr/lib64/security/pam_systemd.so from install of systemd-libs-219-19.el7.x86_64 conflicts with file from package systemd-container-libs-208.20-6.el7.centos.x86_64
This is due to the transition from the systemd-container-* packages to the actual systemd packages. For some reason, the upstream package doesn't obsolete or conflict with the old ones, so you'll get file conflict errors when installing packages.
The fix for this issue is very simple. Do a fresh docker pull of the base container.
# docker pull centos:latest
# docker pull centos:7
Your base container will now be at the appropriate level, and won’t have this conflict on package installs so you can simply run your docker build again and it will work.
But I have to use 7.1.1503!
If for some reason you must use a point-in-time image like 7.1.1503, then a package swap will resolve things for you. 7.1.1503 comes with fakesystemd, which you must exchange for systemd. To do this, execute the following command in your Dockerfile, prior to installing any packages:
RUN yum clean all && yum swap fakesystemd systemd
This will ensure you get the current package data, and will replace the fakesystemd package which is no longer needed. That’s all there is to solving the file conflicts and systemd dependency issues for CentOS base containers.
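As a minimal sketch of how that looks in practice (the image tag, the -y flag and the httpd install are illustrative only, not part of any official image):
cat > Dockerfile <<'EOF'
FROM centos:7.1.1503
# swap out fakesystemd before installing anything else; -y keeps the build non-interactive
RUN yum clean all && yum swap -y fakesystemd systemd
RUN yum install -y httpd
EOF
docker build -t my-el7-image .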
December 14, 2015
As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including the kids' workstations) to that version, and also to the newer kernel.
Usually that's just a smooth operation, but sometimes some backported features/new features, especially in the kernel, can lead to some strange issues.
That's what happened with my older Thinkpad Edge: a cheap/small thinkpad that Lenovo made several years ago (circa 2011), and that I used a lot when travelling, as it only has a .
So basically not a lot of horse power, but still something convenient just to read your mails, remotely connect through ssh, or browse the web.
When rebooting into the newer kernel, it panics immediately.
Two bug reports are open for this, one on the , linked also to the . The current status is that there is no kernel update that will fix this, but there is an easy-to-implement workaround:
boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot into the previous kernel)
once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=" .." line in the file /etc/default/grub
as root, run grub2-mkconfig -o /etc/grub2.cfg (a shell sketch follows this list)
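If you prefer doing that from a shell, a rough sketch (back up /etc/default/grub first; on CentOS 7 /etc/grub2.cfg is normally a symlink to the real grub.cfg, so adjust the path if your layout differs):
# append the parameter to the existing GRUB_CMDLINE_LINUX line, then regenerate the grub config
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 initcall_blacklist=clocksource_done_booting"/' /etc/default/grub
grub2-mkconfig -o /etc/grub2.cfg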
Hope it can help others too
December 05, 2015
We now have a CentOS Users and Contributors group for the UK ( /CentOS-UK/ ), and I hosted the inaugural meetup over beer a few days back. It was a great syncup, with lots of very interesting conversations. One thing that always comes through at these meetings, and that I really appreciate, is the huge diversity in the userbase, and the very different viewpoints and value propositions that people bring to the CentOS Linux platform and the larger ecosystem around it.
The main points that stuck with me over the evening were the CentOS Atomic Host ( https://wiki.centos.org/SpecialInterestGroup/Atomic/Download ) and the CentOS on ARM devices ( and the general direction of where ARM devices are going ). Stay tuned for more info on that in the next few weeks.
Looking forward now to the next London meetup ( likely 2nd week of Jan ’16 ), and also joining some meetings in other parts of the UK. Everyone is welcome to join, and I could certainly use help in organising meetups in other places around the UK. See you at a CentOS meetup soon.
November 30, 2015
Last Friday, while working on something else (the "CentOS 7 userland" release for Armv7hl boards), I got notifications from our monitoring instance complaining about failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (checking the cpu iowait time). Usually you'd verify what happens in the Virtual Machine itself, but even connecting to the VM was difficult and slow. Once connected, though, nothing looked strange and there was no real activity, not even on the disk (there are plenty of tools for this, but is helpful to see which process is reading/writing to the disk in that case), yet iowait was almost at 100%.
As said, it was happening suddenly for all Virtual Machines on the same hypervisor (a CentOS 6 x86_64 KVM host), and even the hypervisor itself was suddenly complaining (though less than the VMs) about iowait too. So obviously it wasn't a tuning problem at the hypervisor/VM level, but something else. That rang a bell: if you have a raid controller and its battery needs to be replaced, for example, the controller can decide to stop all read/write caching, slowing down all IO going to the disk.
At first sight there was no HDD issue, and the array/logical volume was working fine (no failed HDD in that RAID10 volume), so it was time to dive deeper into the analysis.
That server has the following raid adapter :
03:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 03)
That means that you need to use the tool for that.
A quick MegaCli64 -ShowSummary -a0 showed me that the underlying disks were indeed active, but my attention was caught by the fact that there was a "Patrol Read" operation in progress on a disk. I then discovered a useful (bookmarked, as it's a gold mine) explaining the issue with the default settings and the "Patrol Read" operation.
While it seems a good idea to scan the disks in the background to discover disk errors in advance (PFA), the default setting is really not optimal: according to that website, it "will take up to 30% of IO resources".
I decided to stop the currently running patrol read process with MegaCli64 -AdpPR -Stop -aALL, and I immediately saw the Virtual Machines' (and the hypervisor's) iowait going back to normal.
Here is the Zabbix graph for one of the impacted VMs, and it's easy to guess when I stopped the underlying "Patrol Read" process:
That "Patrol Read" operation is scheduled to run by default once a week (168h), so your real options are to either disable it completely (through MegaCli64 -AdpPR -Dsbl -aALL) or, as advised, at least change its IO impact (for example to 5%: MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL).
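Collected together, the relevant commands look like this; the -AdpPR -Info query is an extra check I believe MegaCli supports for showing the current Patrol Read settings, the rest are straight from above:
MegaCli64 -ShowSummary -a0                     # overall adapter / array health
MegaCli64 -AdpPR -Info -aALL                   # current Patrol Read mode and schedule
MegaCli64 -AdpPR -Stop -aALL                   # stop a Patrol Read that is running right now
MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL   # cap its IO impact at 5% (advised)
# MegaCli64 -AdpPR -Dsbl -aALL                 # or disable it completely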
Never underestimate the power of hardware settings (in the BIOS, or in this case the raid hardware controller).
Hope it can help others too
November 24, 2015
Today we’re announcing an update to CentOS Atomic Host (version 7.), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host. Please note that this release is based on content derived from the upstream 7.1 release.
CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at . The backing ostree repo is published to .
CentOS Atomic Host includes these core component versions:
kernel-3.10.0-229.20.1.el7.x86_64
cloud-init-0.7.5-10.el7.centos.1.x86_64
atomic-1.6-6.gitca1e384.el7.x86_64
kubernetes-1.0.3-0.2.gitb9a88a7.el7.x86_64
etcd-2.1.1-2.el7.x86_64
ostree-.atomic.el7.x86_64
docker-1.8.2-7.el7.centos.x86_64
flannel-0.2.0-10.el7.x86_64
If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:
$ sudo atomic host upgrade
(409 MB) and (421 MB) are Vagrant boxes for Libvirt and Virtualbox providers.
The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:
$ vagrant init centos/atomic-host && vagrant up --provider virtualbox
(673 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.
(934 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available compressed in gz (408 MB) and xz formats.
Amazon Machine Images
ami-39348e55
ap-northeast-1
ami-cec7e4a0
ap-southeast-2
ami-5e421b3d
ami-cb6878aa
ap-southeast-1
ami-49a4652a
eu-central-1
ami-f72b399b
ami-3c2ff54f
ami-48e88628
ami-19d59073
cf7c5e67e18a3aaa27d1c6ca62c80fb5e18a836a2c1e CentOS-Atomic-Host-7.-GenericCloud.qcow2
92cf36f528ae0ee0d0dd32ccf5f729f2c6c9a99a7471882effecaa CentOS-Atomic-Host-7.-GenericCloud.qcow2.gz
263c1f403c352d3fd241693caa12dbd0656a22cdc3f04ca3ca8d1 CentOS-Atomic-Host-7.-GenericCloud.qcow2.xz
dfe0c85efff3adc75991aabc48ec8f8ad49dad44f8c51cfb8165 CentOS-Atomic-Host-7.-Installer.iso
139eb88d6a5d1a54aecb3fccfbbd33e4ec CentOS-Atomic-Host-7.-Vagrant-Libvirt.box
63ab56d08cdca7ee3cdd51aac3ff3d91a0 CentOS-Atomic-Host-7.-Vagrant-Virtualbox.box
Release Cycle
The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.
Getting Involved
CentOS Atomic Host is produced by the , based on upstream work from . If you’d like to work on testing images, help with packaging, documentation — join us!
The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.
Getting Help
If you run into any problems with the images or components, feel free to ask on the mailing list.
Have questions about using Atomic? See the mailing list or find us in the #atomic channel on Freenode.
November 19, 2015
Red Hat released their second point release to the EL7 series today. Most if not all of the sources seem to already be in place on git.centos.org, so we can start the rebuild and QA cycle. Red Hat release notes can be found .
It is not yet decided whether we will do a CR release with the built packages, or whether it will be a release with ISOs and all. Our reading of the errata released with 7.2 indicates no critical security update. We will post news on this matter here. Those errata can be found .
As Regulars know, this will take some time, and the next minor release of CentOS-7 will be done when it is done. So it can be either 7.1511 or 7.1512.
Stay tuned.
October 15, 2015
In late 2012 I constructed myself a bare-bones cluster of a couple of motherboards, stacked up and powered, to be used as a dev cloud. It worked, but it was a huge mess on the table, and it was certainly neither portable nor quiet. That didn't mean I would not carry it around - I did, across the Atlantic a few times and over to Asia once. It worked. Then in 2014 I gave the stack away, which caused a few issues, since living in a certain part of London means I must put up with a rather sad 3.5Mbps ADSL link from BT. Had I been living in a rural setting, government grants etc. would ensure we get super high speed internet, but not in London.
I really needed my development and testing cluster back (since my work pattern had come to rely on it). Time to build a new one!
Late summer last year the folks at kindly built me a cloud box to my specifications. This is a single case that can accommodate up to 8 mini-ITX (or 6 mini-ATX, which is what I am using) motherboards, along with all the networking kit for them and a disk each. It's not yet left the UK, but the box is reasonably well traveled in the country. If you come along to the CentOS Dojo in Belgium or the CentOS table at FOSDEM, you should see it there in 2016. Here you can see the machine standing on its side, with the built-in trolley for mobility.
Things to note here: you can see the 'back' of the box, with the power switches, the psu with its 3 hot-swap modules, the 3 large case cooling fans, and the cutout for the external network cable to go into the box. While there is only 1 psu, the way things are cabled inside the box it's possible to power up to 4 channels individually. So with 8 boards, you'd be able to power manage each pair on its own.
Here is the empty machine as it was delivered. The awesome guys at Protocase pre-plumbed in the psu and wired up the case fans (there are 3 at the back and 2 in the front; the ones in the front are wired from the psu so run all the time, whereas the back 3 are connected as regular case fans onto the motherboards, so they come on when the corresponding machine is running). I thought long and hard about moving the fans to the top/bottom, but since the machine lives vertically, this position gives me the best airflow. On the right side, opposite the psu, you can see 4 mounting points; this is where the network switch goes in.
Close up of the PSU used in this machine. I've load tested this with 6x i5 4690K boards and it works fine; I did test with load, for a full 24 hrs. Next time I do that, I'll get some wattage and amp readings as well. It's rated for 950W max, and I suspect anything more than 6 boards will get pretty close to that mark. Also worth keeping in mind is that this is meant to be a cloud or mass infra testing machine; it's not built for large storage. Each board has its own 256GB SSD, and if I need additional storage, that will come over the network from a ceph/gluster setup outside.
The PSU output is split and managed in multiple channels; you can see 3 of the 4 here, along with some of the spare case-fan lines.
Another shot of the back 3 fans; you can also see the motherboard mounting points built into the base of the box. They put these up for mini-ITX / mini-ATX as well as regular ATX. I suspect it's possible to get 4 ATX boards in there, but it's going to be seriously tight and the case fans might need an upgrade.
Close up of the industrial trolley that is mounted onto the box (it's easy to remove when it's not needed; I just leave it on).
The right side of the box hosts the network switch; this allows me to put the power cables on the left and back, with the network cables on the right and front. Each board has its own network port (as they do..), and I use a USB3-to-gigabit converter at the back to give me a second port. This then allows me to split public and private networks, or use one for storage and another for application traffic, etc. Since this picture was taken, I've stuck another 8-port switch on the front of this switch's cover, to give me the 16 ports I really need.
Here is the rig with the first motherboard added in, with an Intel i5 4690K cpu. The board can do 32GB; I had 16 in it then, and have upgraded since.
Now with everything wired up. There is enough space under the board to drive the network cables through.
And with a second board added in, this time an AMD FX-8350. It's the only AMD in the mix, and I wanted one to have the option to test with; the rest of the rig is all Intel. The i5s have fewer cores, but overall far better power usage patterns, and they run cooler. With the box fully populated and running a max load, things get warm in there.
The boards layer up on top of each other. In the picture above, the Intel board is aligned to the top of the box, while the next-tier board was aligned to the bottom side of the box. This gives the cpu fans a bit more head room, and has a massive impact on temperature inside the box. Initially I had just stacked them up 3 on each side, and ambient temperature under sustained load was easily touching 40 deg C in the box. Staggering them meant ambient temperature came down to 34 deg C.
One key tip was : these fit right into the motherboard mounting points and run all the way through to the top of the box. You can then use nuts on the rod to hold the motherboard at whatever height you need.
If you fancy a box like this for yourself, give the guys at a call and ask for Stephen MacNeil; I highly recommend their work. The quality of the work is excellent. In a couple of years' time, I am almost certainly going to be back talking to them about the cloudybox2. And yes, they are the same guys who build the .
Update: the box runs pretty quiet. I typically only have 2 or 3 machines running in there, but even with all 6 running a heavy sustained load it's not massively loud; the airflow is doing its thing. The key thing there is that the front fans are set to ingest air, and they line up perfectly with the cpu placements, blowing directly at the heat sinks. I suspect the topmost tier of boards only gets about 50% of the airflow compared to the lower two tiers, but they also get the least utilisation of the lot.
October 13, 2015
The Alternative Architecture Special Interest Group ( ) is happy to announce the release of the x86 32-bit version of CentOS Linux 7.
This architecture is also known as i386 or i686.
You can get this version of CentOS from the .
This version of CentOS Linux 7 is for PAE-capable 32-bit machines, including x86-based IoT boards similar to the Intel Edison.
It joins the 64-bit ARMv8 (aarch64) architecture as a fully released AltArch version.
Work within the AltArch SIG currently continues on the 32-bit ARMv7, 64-bit PPC little-endian, and 64-bit PPC big-endian architectures.
October 09, 2015
We typically push updates in batches. A batch might be anywhere from 1 update rpm to hundreds (when there is a big update upstream), though most batches are in the region of 5 to 20 rpms. So how many batches have we done in the last year and a bit? Here is a graph depicting our update batch release rate from Jan 1st 2014 till today.
I've removed the numbers from the release rate and left the dates in, since it's the trend that's most interesting. In a few months' time, once we hit the new year, I'll update this to split by year so it's easy to see how 2015 compared with 2014.
You can click the image above to get a better view. The blue segment represents batches built, and the orange represents batches released.
October 06, 2015
You may have seen the announcement that CentOS Atomic Host 15.10 is now available (if not, go read the announcement here: ). You can get the Vagrant boxes for this image via the process, or just via direct downloads from .
What I've also done this time is create a vagrant_aws box that references the AMIs in the regions where they are published. This is hand-crafted and really just a PoC-like effort, but if it's something people find helpful, I can plumb this into the main image generation process and ensure we get this done for every release.
QuickStart
Once you have vagrant running on your machine, you will need the vagrant-aws plugin. You can install this with:
"vagrant plugin install vagrant-aws"
and check it's there with
"vagrant plugin list".
You can then add the box with "vagrant box add centos/atomic-host-aws". Before we can instantiate the box, we need a local config with the aws credentials, so create a directory and add the following into a Vagrantfile there:
Vagrant.configure(2) do |config|
  config.vm.box = "centos/atomic-host-aws"
  config.vm.provider :aws do |aws, override|
    aws.access_key_id = "Your AWS EC2 Key"
    aws.secret_access_key = "Your Secret Key"
    aws.keypair_name = "Your keypair name"
    override.ssh.private_key_path = "Path to key"
  end
end
Once you have those lines populated with your own information, you should now be able to run
"vagrant up --provider aws"
It takes a few minutes to spin up the instance. Once done, you should be able to "vagrant ssh" and use the machine. Just keep in mind that you want to terminate any unused instances, since stopping will only suspend them; a real "vagrant destroy" is needed to release the EC2 resources.
Note: this box is set up with the folder sync feature turned off. Also, the AMIs per region are specified in the box itself; if you want to use a specific region, just add an aws.region = "..." line to your local Vagrantfile and everything else should get taken care of.
You can read more about the aws provider for vagrant here :
Let me know how you get on with this, if folks find it useful we can start generating these for all our vagrant images.
October 05, 2015
Today we’re announcing an update to CentOS Atomic Host (version 7.), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.
CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, or as an installable ISO, qcow2 or Amazon Machine image. These images are available for download at . The backing ostree repo is published to .
CentOS Atomic Host includes these core component versions:
kernel-3.10.0-229.14.1.el7.x86_64
cloud-init-0.7.5-10.el7.centos.1.x86_64
atomic-1.0-115.el7.x86_64
kubernetes-1.0.3-0.1.gitb9a88a7.el7.x86_64
flannel-0.2.0-10.el7.x86_64
docker-1.7.1-115.el7.x86_64
etcd-2.1.1-2.el7.x86_64
ostree-.atomic.el7.x86_64
If you’re running a previous version of CentOS Atomic Host, you can upgrade to the current image by running the following command:
$ sudo atomic host upgrade
(389 MB) and (400 MB) are Vagrant boxes for Libvirt and Virtualbox providers.
The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see ). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:
vagrant init centos/atomic-host && vagrant up --provider virtualbox
(672 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.
(393 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available in gz (391 MB) and xz formats.
Amazon Machine Images
Region Image ID
------ --------
sa-east-1 ami-1b52c506
ap-northeast-1 ami-
ap-southeast-2 ami-43f2bb79
us-west-2 ami-73eaf043
ap-southeast-1 ami-346f7966
eu-central-1 ami-7ed1d363
eu-west-1 ami-3936034e
us-west-1 ami-6d9c5a29
us-east-1 ami-
a172195eae505bee137cd1f8c11a74c7cf94b0663cb2 CentOS-Atomic-Host-7.-GenericCloud.qcow2
33d338bb42ef916a40ac89adde9c121c98fbdf91b7 CentOS-Atomic-Host-7.-GenericCloud.qcow2.gz
c944d3252aadc818ac42ae70dd8c2e72e CentOS-Atomic-Host-7.-GenericCloud.qcow2.xz
4e09f6dfae5024191fec9eab56adcb31c6b3ec37970 CentOS-Atomic-Host-7.-Installer.iso
bcfbe8e18b6efddae9add9c88 CentOS-Atomic-Host-7.-Vagrant-Libvirt.box
8f626bdafaecb954ae3fab6a8a481da1b3ebb8f7acf6e84cf0ba65 CentOS-Atomic-Host-7.-Vagrant-Virtualbox.box
Release Cycle
The CentOS Atomic Host image follows the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they’re rebuilt and included in new images. After the images are tested by the SIG and deemed ready, we announce them.
Getting Involved
CentOS Atomic Host is produced by the , based on upstream work from . If you’d like to work on testing images, help with packaging, documentation — join us!
The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.
Getting Help
If you run into any problems with the images or components, feel free to ask on the mailing list.
Have questions about using Atomic? See the mailing list or find us in the #atomic channel on Freenode.
October 01, 2015
The software collections special interest group ( ) has been making great progress and has finished its initial bootstrap process. They are now getting ready to do a mass build for test and release. I've just delivered their rpm signing key, so we are pretty close to seeing content on mirror.centos.org.
As an initial goal, they are working on and delivering rpms, but in parallel, efforts are underway to get container images into the registries as well, so folks using containers today can consume the software collections in either format.
The effort is being co-ordinated by Honza Horak ( ), and he's the best person to get in touch with to join and help.
September 23, 2015
Recently I had (from an Infra side) to start deploying KVM guests for the arches, so that SIG contributors could start bootstrapping the CentOS 7 rebuild for those arches. I'll probably write a tech review about , and the fact that you can just use libvirt/virt-install to quickly provision new VMs on , but I'll do that in a separate post.
Parallel to ppc64/ppc64le, interested some community members, and the discussion/activity about that arch happens on the . It's slowly coming along, and some users have already reported using it on some boards (but still unsigned and with no update packages - yet).
Last (but not least) in this AltArch list is i686: built all the packages, and they are already publicly available on , each time in parallel to the x86_64 version. It seems that respinning the ISO for that arch and the last tests are the only things left to do.
If you're interested in participating in AltArch (and have a special interest in a specific arch/platform), feel free to discuss that on the .
September 16, 2015
So, thanks to the folks from OpenNebula, we'll have another CentOS Dojo in Barcelona on Tuesday 20th October 2015. That event will be colocated with the , happening the days after the Dojo. If you're attending the OpenNebulaConf, or if you're just in the area and would like to attend the CentOS Dojo, feel free to .
Regarding the Dojo content, I'll myself be giving a presentation about SELinux: covering a little bit of intro (still needed for some folks afraid of using it; I don't know why, but we'll change that ...) about SELinux itself, how to run it on bare metal and virtual machines, and there will be some slides for the mandatory container hype thing.
But we'll also cover managing selinux booleans/contexts, etc. through your config management solution (we'll cover , as those are the two I'm using on a daily basis), and also how to build and deploy custom selinux policies with your config management solution.
On the other hand, if you're a CentOS user and would like to give a talk yourself during that Dojo, feel free to submit one! More information about the Dojo is on the .
See you there !
September 10, 2015
Jason just announced our second stable CentOS Atomic Host release at . I'm very excited about this one, and it's not only because I've helped make it happen; this is also the first time a SIG in the CentOS ecosystem has done a full release, from rpms, to images, to hosted vendor space (AMIs in 9 regions on Amazon's EC2).
One of the other things that I've been really excited about is that this is the first time we've used the rpm-sign infra that I've been working on these past few days. It allows SIG-built content (rpms or images or ISOs or even text) to be signed with pre-selected keys, and to do this without having to compromise the key trust level. I will blog more about this process, how SIGs can consume these keys, and how this maps to the TAG model being used in cbs.centos.org.
For now, go get started with the CentOS Atomic Host!
We have a dojo coming up in Barcelona, co-located with the OpenNebula conference in late October. The event is going to run from 1:30pm to 6:30pm (but I suspect it won't really end till well into the early hours of the morning, as people keep talking about CentOS things over drinks, dinner, more drinks etc!).
You can get the details, including howto register at .
Fabian is going to be there, and we are talking to a great set of potential speakers. The focus is going to be very much on hands-on learning about technologies on and around CentOS Linux! And as in the past, we expect content to be aimed at sysadmin / operations folks rather than developers (although we highly encourage developers to come along as well, to talk to us and share their experiences with the sysadmin world!).
Because of what I do and how / where I do it, there are always online, realtime conversations going on (irc or IM), and it's never really been a huge issue except for people on the US Pacific coast. It's always a case of them starting work when I am finishing for the day, and even when I work late at night for the odd hours, it's almost always smack in the middle of their lunch hours. And they finish work, even their late-night sessions, just about when I am getting started for the day.
So to everyone in that TZ, I just want to remind you that the best thing to do is stick with emails. I know it's fashionable these days to complain about email and all that, but by and large there is no other means of comms around these days that is easier to get to, as mature, and as productive for async conversations. The other thing to keep in mind is that while there are other services and ideas floating around that help solve specific challenges email isn't best suited for, none of them do a good enough job to remove the email process from the equation. So if we are still going to have email knocking about, let's just use it.
And I'm not ignoring people on irc, but with 300+ panes in irssi, sometimes it can get hectic and I will often encourage you to 'Lets Move to Mail'. It's not because I don't want to have the convo right now; it's because I want to have the complete conversation!
Today we’re releasing a significant update to the CentOS Atomic Host (version 7.), a lean operating system designed to run Docker containers, built from standard CentOS 7 RPMs, and tracking the component versions included in Red Hat Enterprise Linux Atomic Host.
CentOS Atomic Host is available as a VirtualBox or libvirt-formatted Vagrant box, as an installable ISO image, as a qcow2 image, or as an Amazon Machine Image. These images are available for download at . The backing ostree repo is published to .
Currently, the CentOS Atomic Host includes these core component versions:
kernel 3.10.0-229
docker 1.7.1-108
kubernetes 1.0.0-0.8.gitb2dafda
etcd 2.0.13-2
flannel 0.2.0-10
cloud-init 0.7.5-10
atomic 1.0-108
If you’re running the version of CentOS Atomic Host that shipped in June, you can upgrade to the current image by running the following command:
$ sudo atomic host upgrade
If you're currently running the older, test version of the CentOS Atomic Host, or if you're running any other atomic host (from any distro or release in the past), you can rebase to this released CentOS Atomic Host by running the following two commands:
$ sudo ostree remote add centos-atomic-host http://mirror.centos.org/centos/7/atomic/x86_64/repo
$ sudo rpm-ostree rebase centos-atomic-host:centos-atomic-host/7/x86_64/standard
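Once the rebase has run, a quick sanity check with rpm-ostree (part of the Atomic Host tooling) should show the centos-atomic-host ref as the pending deployment until you reboot into it:
$ sudo rpm-ostree status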
(393 MB) and (404 MB) are Vagrant boxes for Libvirt and Virtualbox providers.
The easiest way to consume these images is via the Atlas / Vagrant Cloud setup (see ). For example, getting the VirtualBox instance up would involve running the following two commands on a machine with vagrant installed:
$ vagrant init centos/atomic-host && vagrant up --provider virtualbox
(682 MB) can be used via regular install methods (PXE, CD, USB image, etc.) and uses the Anaconda installer to deliver the CentOS Atomic Host. This allows flexibility to control the install using kickstarts and define custom storage, networking and user accounts. This is the recommended process for getting CentOS Atomic Host onto bare metal machines, or to generate your own image sets for custom environments.
(922 MB) is suitable for use in on-premise and local virtualized environments. We test this on OpenStack, AWS and local Libvirt installs. If your virtualization platform does not provide its own cloud-init metadata source, you can create your own NoCloud iso image. The Generic Cloud image is also available in gz (389 MB) and xz (303 MB) formats.
Amazon Machine Images
ami-47ed785a
ap-northeast-1 ami-
ap-southeast-2 ami-05511e3f
ami-a5b6ab95
ap-southeast-1 ami-dc99938e
eu-central-1
ami-8c111191
ami-0b6e4d7c
ami-69679d2d
ami-69c3aa0c
a132d029a646cecf0c71cc42f0a10dc0c CentOS-Atomic-Host-7.-GenericCloud.qcow2
aad5d39e902b068c7373aac3f7dc9b2c962a6ac0fe CentOS-Atomic-Host-7.-GenericCloud.qcow2.gz
ce7f13bfe43e03e47ca433dbd1831d2cbe0 CentOS-Atomic-Host-7.-GenericCloud.qcow2.xz
bdaf877cffe3b CentOS-Atomic-Host-7.-Installer.iso
b38c6e6c4aca8ab90fc41 CentOS-Atomic-Host-7.-Vagrant-Libvirt.box
bdcfaf4f345daeea7c04f057c0ab6e29bfef3c82eab CentOS-Atomic-Host-7.-Vagrant-Virtualbox.box
Release Cycle
The rebuilt image will follow the upstream Red Hat Enterprise Linux Atomic Host cadence. After sources are released, they'll be rebuilt and included in new images. After the images are tested by the SIG and deemed ready, they'll be announced. If you'd like to help with the process, there's plenty to do!
Getting Involved
CentOS Atomic Host is produced by the , based on upstream work from . If you’d like to work on testing images, help with packaging, documentation, or even help define the direction of our monthly release — join us!
The SIG meets weekly on Thursdays at 16:00 UTC in the #centos-devel channel, and you'll often find us in #atomic and/or #centos-devel if you have questions. You can also join the mailing list if you'd like to discuss the direction of Project Atomic, its components, or have other questions.
Getting Help
If you run into any problems with the images or components, feel free to ask on the mailing list.
Have questions about using Atomic? See the mailing list or find us in the #atomic channel on Freenode.
September 09, 2015
In the last few days, I encountered a strange issue^Wlimitation with that I wouldn't have thought of. I've used ext2/ext3/ext4 for quite some time, and so I've been used to resizing the filesystem "online" (while "mounted"). In the past you had to use for that; then it was integrated into .
The logic is simple and always the same: extend your underlying block device (or add another one), then modify the LVM Volume Group (if needed), then the Logical Volume, and finally run the resize2fs operation; so something like this (a fuller sketch of the whole chain follows the two commands below):
lvextend -L +${added_size}G /dev/mapper/${name_of_your_logical_volume}
resize2fs /dev/mapper/${name_of_your_logical_volume}
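Spelled out end to end, a sketch of that chain looks like this; the device name, volume group and sizes are placeholders for your own setup:
pvcreate /dev/sdb1                        # prepare the newly added block device
vgextend myvg /dev/sdb1                   # grow the Volume Group with it
lvextend -L +10G /dev/mapper/myvg-mylv    # grow the Logical Volume
resize2fs /dev/mapper/myvg-mylv           # grow the ext4 filesystem online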
I don't know how many times I've used that, but this time resize2fs wasn't happy:
resize2fs: Operation not permitted While trying to add group #16384
I remembered having had an issue in the past because of the journal size, but this wasn't the case here.
FWIW, you can always verify your journal size with dumpe2fs /dev/mapper/${name_of_your_logical_volume} |grep "Journal Size"
Small note : if you need to increase the journal size, you have to do it "offline" as you have to remove the journal and then add it back with a bigger size (and that also takes time) :
umount /$path_where_that_fs_is_mounted
tune2fs -O ^has_journal /dev/mapper/${name_of_your_logical_volume}
# Assuming we want to increase to 128Mb
tune2fs -j -J size=128 /dev/mapper/${name_of_your_logical_volume}
But in that case, as said, it wasn't really the root cause: while the resize2fs: Operation not permitted error doesn't give much information, dmesg was more explicit:
EXT4-fs warning (device dm-2): ext4_group_add: No reserved GDT blocks, can't resize
The limitation is that when the initial Ext4 filesystem is created, the number of reserved/calculated GDT blocks for that filesystem will only allow it to be grown by a .
Ouch. That system (CentOS 6.7) I was working on had been provisioned in the past for a certain role, and that particular fs/mount point was set to 2G (installed like this through the ). But the role eventually changed, and so the filesystem had been extended/resized a few times, until I tried to extend it to more than 2TiB, which then caused resize2fs to complain ...
So, two choices:
you do it "offline" through umount, e2fsck, resize2fs, e2fsck, mount (but time consuming; sketched below), or
if you still have plenty of space in the VG, you just create another volume with the correct size, format it, rsync the content over, umount the old one and mount the new one.
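For the first option, the offline sequence is roughly this (same placeholder names as above; the fsck and resize passes can take a long time on a big filesystem):
umount /$path_where_that_fs_is_mounted
e2fsck -f /dev/mapper/${name_of_your_logical_volume}
resize2fs /dev/mapper/${name_of_your_logical_volume}    # grows the fs to the current LV size
e2fsck -f /dev/mapper/${name_of_your_logical_volume}
mount /$path_where_that_fs_is_mounted                   # assumes the mount point is listed in /etc/fstab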
That means I learned something new (one learns something new every day!), and also that you need to keep that limitation in mind when using a kickstart (one that doesn't include the option, but a fixed size for the filesystem).
Hope that it can help
September 02, 2015
As some initiatives (like , as one example) try to force usage everywhere, we thought about doing the same for the CentOS.org infra. Obviously we already had some certificates, but not for every httpd server that was serving content for CentOS users, so we decided to TLS usage on those servers. But TLS can obviously be used for other things than a web server.
That's why we considered implementing something for our nodes. The interesting part is that it's really easy (depending, of course, on the security level one wants to reach). There are two parts in the postfix main.cf that can be configured:
outgoing mails (aka your server sends mail to other SMTPD servers)
incoming mails (aka remote clients/servers send mail to your postfix/smtpd server)
Let's start with the client/outgoing part: just adding these lines to your main.cf will automatically configure it to use TLS when possible, but otherwise fall back to clear text if the remote server doesn't support TLS:
# TLS - client part
smtp_tls_CAfile=/etc/pki/tls/certs/ca-bundle.crt
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtp_tls_session_cache_database = btree:/var/lib/postfix/smtp_scache
The interesting part is the smtp_tls_security_level option: as you see, we decided to set it to may. That's what Postfix calls "Opportunistic TLS": in a few words, it will try TLS (even with untrusted remote certs!) and will only fall back to clear text if no remote TLS support is available. That's the option we decided to use, as it doesn't break anything, and even if the remote server has a self-signed cert, it's still better to use TLS with a self-signed cert than clear text, right?
Once you have reloaded your postfix configuration, you'll directly see in your maillog that it will start trying TLS and deliver mails to servers configured for it :
3 07:50:37 mailsrv postfix/smtp[1936]: setting up TLS connection to ASPMX.[173.194.207.27]:25
3 07:50:37 mailsrv postfix/smtp[1936]: Trusted TLS connection established to ASPMX.[173.194.207.27]:25: TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)
3 07:50:37 mailsrv postfix/smtp[1936]: DF584A00774: to=<>, orig_to=<>, relay=ASPMX.[173.194.207.27]:25, delay=1, delays=0/0.12/0.22/0.71, dsn=2.0.0, status=sent (250 2.0.0 OK 79siqku.67 - gsmtp)
Now let's have a look at the other part : when you want your server to present the STARTTLS feature when remote servers/clients try to send you mails (still in postfix main.cf) :
# TLS - server part
smtpd_tls_CAfile=/etc/pki/tls/certs/ca-bundle.crt
smtpd_tls_cert_file = /etc/pki/tls/certs/<%= postfix_myhostname %>-postfix.crt
smtpd_tls_key_file = /etc/pki/tls/private/<%= postfix_myhostname %>.key
smtpd_tls_security_level = may
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database = btree:/var/lib/postfix/smtpd_scache
Still easy, but here we also add our key/cert to the config. If you decide to use a cert signed by a trusted CA (like we do for the centos.org infra), be sure that the cert file is the concatenated/bundled version of both your cert and the CA chain cert. That's also documented in the , and if you're already using , you already know what I'm talking about.
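Building such a bundle is just a concatenation, with your own cert first and the CA chain after it; the file names below are placeholders:
cat your-host.crt ca-chain.crt > /etc/pki/tls/certs/your-host-postfix.crt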
If you've correctly configured your cert/keys and reloaded your postfix config, now remote SMTPD servers will also (if configured to do so) deliver mails to your server through TLS. Bonus point if you're using a cert signed by a trusted CA, as from a client side you'll see this :
2 16:17:22 hoth postfix/smtp[15329]: setting up TLS connection to mail.centos.org[72.26.200.203]:25
2 16:17:22 hoth postfix/smtp[15329]: Trusted TLS connection established to mail.centos.org[72.26.200.203]:25: TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits)
2 16:17:23 hoth postfix/smtp[15329]: CC: to=<fake_one_for_blog_post@centos.org>, relay=mail.centos.org[72.26.200.203]:25, delay=1.6, delays=0.19/0.03/1.1/0.31, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as A)
The Trusted TLS connection established part shows that your smtpd server presents a correct cert (bundle) and that the remote server sending you mails trusts the CA used to sign that cert.
There are a lot of TLS options that you can also add for tuning/security reasons; they can all be seen through postconf | grep tls, and also on the Postfix .
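Two quick checks can be handy here; the hostname below is a placeholder for your own mail server:
postconf | grep tls                                                      # list every TLS-related parameter and its current value
openssl s_client -connect mail.example.org:25 -starttls smtp </dev/null  # confirm STARTTLS is offered and inspect the cert chain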