I have not done a general biography in a long time, so I figured I should put one out as a courtesy for people reading these blogs and the emails I send out on various lists:

Who Am I?

My name is Stephen Smoogen, and I have been using computers for a very long time. According to a bug in a PHP website I was on years ago, I am over 400 years old, which would mean I was born on Roanoke Island with Virginia Dare. I think I slept a lot in the next 360 years as, according to my sister, my parents found me in a 'DO NOT RETURN TO SENDER' box outside their door. How she knew that when she is younger than me, I do not know.. but I have learned not to question her.

My first computer was a Heathkit Microcomputer Learning System ET-3400 that my Dad got at a swap meet when he was in the Navy in the 1970s. I had worked with my Dad on some other systems he fixed for coworkers, but that was mostly being bored while watching an oscilloscope and moving probes around boards every now and then. When I wanted to get a computer in the early 1980s, he said I had to show that I could actually program it, since an Apple ][ would have been a major investment for the family. I spent the summer learning binary and hexadecimal and working through the simple machine code the book had in it. I also programmed a neighbour's Apple ][+ with every game I could from the public library's copy of Creative Computing's 101 BASIC Games. My mom and dad saved up for an Apple, and we got an Apple ][e in 1983, which I then used through high school. The first thing I learned about the Apple ][e was how different it was from the ][+. The older systems came with complete circuit diagrams and chip layouts. That had been the reason my dad wanted to get an Apple: he knew he could fix it if a chip went bad. The ][e did not come with that, and boy was Dad furious. "You don't buy a car with the engine welded shut. Don't buy a computer you can't work on." It seemed silly to me at the time, but it would become a founding principle for what I do.

During those years, I went with my dad and his coworkers to various computer clubs, where I learned how to play hack on a MicroVAX running, I think, Ultrix or BSD. While I was interested in computers, I had decided I was going to university to get a degree in Astrophysics.. and the computers were just a hobby. Stubborn person that I am, I finally got the degree, though I kept finding computers to be more enjoyable. I played nethack and learned more about Unix on a VAX-11/750 running BSD 4.1, and became the system administrator of a Prime 300 running a remote telescope project. I moved over to an early version of LynxOS on i386 and helped port various utilities like sendmail over to it for a short time.

After college I still tried to work in Astrophysics by being a satellite operator for an X-ray observation system at Los Alamos. However, I soon ended up administering various systems to get them ready for an audit, and that turned into a full-time job working on a vast set of systems. I got married, and we moved to Illinois, where my wife worked on a graduate degree and I worked for a startup called Spyglass. I went to work for them because they had done scientific visualization, which Los Alamos used.. but by the time I got there, the company had pivoted to being a browser company with Enhanced Mosaic.

For the next 2 years I learned what it is like to be a small startup trying to grow against Silicon Valley and Seattle. I got to administer even more Unix versions than I had before, and also see how Microsoft was going to take over the desktop, because Enhanced Mosaic was at the core of Internet Explorer. At the end of the two years, Spyglass had not gotten bought by Microsoft, and instead laid off the browser people to try and pivot once again as an embedded browser company at a different location. The company was about 15 years too early, as the smart devices its plans treated as the near future didn't start arriving until 2015 or so.

Without a job, I took a chance to work for another startup in North Carolina called Red Hat. At a Linux conference, I had heard Bob Young give a talk about how you wouldn't buy a car with a welded bonnet, and it brought back my dad's grumpiness with Apple from years before. I realized that my work in closed source software had been one of continual grumpiness because I was welding shut the parts that other people needed open.

Because of that quote, I worked at Red Hat for the next four years, learning a lot about openness, startups and tech support. I found that the most important classes from my college years were the psychology ones, not computer science. I also learned that being a "smart mouthed know it all" doesn't work when there are people who are much smarter and know a lot more. I think by the time I burned out after 4 years of 80-hour weeks, I was a wiser person than when I arrived.

I went to work elsewhere for the next 8 years, but came back to Red Hat in 2009, and have worked in the Fedora Project as a system administrator since then. I have seen 15 Fedora Linux releases go out the door, and have come to really love working on the slowest part of Fedora, EPEL. I have also finally used some of the astrophysics degree: the thermodynamics and statistics have been useful for the graphs that various Fedora Project leaders have used to show how each release and the community as a whole have continually changed.


Explaining disk speeds with straws

One of the most common user complaints on Enterprise systems is 'why can't I have more disk space?' The idea is that they look at the cost of disks on Amazon or Newegg, see that they could get an 8 TB hard disk for $260.00, and then hear the storage administrator say it will cost $26,000.00 for the same amount.

Years ago, someone even bought a disk and had it delivered to my desk to 'fix' the storage problem. They thought they were being funny, so I thanked them for the paperweight. I then handed it back and tried to explain why one drive was not going to help... I found that the developer's eyes glazed over as I talked about drive RPM speeds, cache sizes, the number of commands an ATA read/write uses versus SCSI, etc. All of those are important, but none of them are useful terms for a person who just wants to never delete an email.

The best analogy I have is that you have a couple of 2 litre bottles of Coca-Cola (fill in Pepsi, Fanta or Mr Pibb as needed) and a cocktail straw. You can only fill one Coke bottle from the other with that straw. Sure, the bottle is big enough, but it takes a long time to move the soda from one to the other. That is what one SATA disk drive is like.

The next step is to add more disks and make a RAID array. Now you can get a bunch of empty Coke bottles and empty out that one bottle through multiple cocktail straws. Things are moving faster, but it still takes a long time, and you can't really use each of the large bottles as much as you would like because emptying them out via a cocktail straw is pretty slow.

The next-sized solution is regular drinking straws with cans. The straws are bigger, but the cans are smaller.. you can fill the cans up or empty them without as much time waiting in a queue. However, you need a lot more of them to equal the original bottle you are emptying. This is the SAS solution, where the disks are smaller and faster, with much better throughput because of it. It is a tradeoff in that 15k drives use older technologies and so store less data. They also have larger caches and smarter on-drive software to make the straw bigger.

Finally there is the newest solution, which would be a garden hose connected through a balloon to a coffee cup. This is the SAS SSD solution. The garden hose allows a large amount of data to go up and down the pipe; the balloon is how much you can cache when reads or writes arrive faster than the other end can take them; and the coffee cup is because SSDs are expensive and there isn't a lot of space. You need a LOT of coffee cups compared to soda cans or 2 litre bottles.

Most enterprise storage is some mixture of all of these to match the use case.

  • SATA raid is useful for backups. You are going to sequentially read/write large amounts of data to some other place. The straws don't need to be big per drive, and you don't worry about how quickly it gets backed up. The cost per TB is of course the smallest.
  • SAS raid is useful for mixed-user shared storage. The reads and writes to this need larger straws because programs have different IO patterns. The cost per TB is usually an order of magnitude or two greater, depending on other factors like how much redundancy you wanted, etc.
  • SSD raid is useful for fast shared storage. It is still more expensive than SAS raid.
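
To put rough numbers behind the straws, a tool like fio (packaged in EPEL) can measure the 'straw size' of each tier. A sketch, not a recipe: /dev/sdb is a placeholder device, --readonly keeps the runs from writing to it, and the actual numbers will vary wildly with hardware:

# fio --name=seq --filename=/dev/sdb --readonly --rw=read --bs=1M --direct=1 --runtime=30 --time_based
# fio --name=rand --filename=/dev/sdb --readonly --rw=randread --bs=4k --direct=1 --runtime=30 --time_based

The sequential 1M run shows the straw at its widest; the 4k random run is much closer to what a mail spool or shared home directory actually does to a disk, and is where the tiers really separate.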
And now to break the analogy completely. 

Software-defined storage would be where you are using the cocktail straws with Coke bottles, but you have spread them around the building. Each time coke gets put in one, a hose spreads that coke around so each block of systems is equivalent. In this case the cost per system has gone down, but there needs to be a larger investment in the networking tying the servers together. [A 1 Gbit backbone network is like a cocktail straw between systems, a 10 Gbit backbone is like a regular straw, and 40G/100G networks are the hoses.]

Now my question is.. has anyone done this in real life? It seems crazy enough that someone has made a video.. but my google-fu is not working tonight.


Ramblings about long ago and far away

My first job out of college in 1994 was working at Los Alamos National Labs as a Graduate Research Assistant. It was supposed to be a post where I would use my bachelor's degree in Physics for a year until I became a graduate student somewhere. The truth was that I was burnt out on university and had little urge to go back. I instead used my time to learn much more about Unix system administration. It turned out the group I worked in had a mixture of SGI IRIX, Sun SPARCstation, HP, Convex, and I believe AIX systems. The systems had been run by graduate students for their professors and needed some central management. While I didn't work for the team doing that work, I spent more and more time working with them to get it in place. After a year, it was clear I was not going back to Physics, and my old job was ending. So the team I worked with gave me a reference to another place at the Lab, where I began work.

This network had even more Unix systems, as they had NeXT cubes, old Sun boxes, Apollos, and some others I am sure to have forgotten. All of them needed a lot of love and care, as they had been built for various PhDs and postdocs for various needs and then forgotten. My favorite box was one where the owner required that nearly every file be set 777. I have multiple emails from then which echo every complaint people have made about SELinux in the last decade: if there was some problem on the system, it was because a permission was set.. and until it was shown that the problem remained at 777, you could not look at it being anything else. [The owner was also unbelievably brilliant in other ways.. but hated arbitrary permission models.]

In any case, I got a lot of useful experience with all kinds of Unix systems, user needs, and user personalities. I also got to run Softlanding Linux System (SLS) on a 486 with 4 MB of RAM running Linux kernel 0.99.4? and learn all kinds of things about PC hardware versus 'Real Computers'. The 486 was really an overclocked 386 with some added instructions; it had originally been a Cyrix DX33 that had been relabeled with industrial whiteout as a 40 MHz part. It sort of worked at 40 MHz but was reliable only at 20 MHz. Such were the issues with getting deals from computer magazines.. sure, the one the guy in the next apartment bought worked great.. mine was a dud.

I had originally run MCC Interim Linux (from the Manchester Computing Centre) in college, but when I moved it was easier to find a box of floppies with SLS, so I had installed that on the 486. I would then download software source code from the internet and rebuild it for my own use, using every extra flag I could find in GCC to make my 20 MHz system seem faster. I instead learned that most of the options didn't do anything on i386 Linux at the time, and most of my reports about it were probably met with eye-rolls by the people at Cygnus. My supposed goal was to set up a MUD so I could code up a text-based virtual reality. Or to get a war game called Conquer working on Linux. Or maybe get xTrek working on my system. [I think I was mostly trying to become a game developer by just building stuff versus actually coding stuff. I cave-man debugged a lot of things using what I had learned in FORTRAN, but it wasn't actually making new things.]

For years, I looked back on that time and thought it was a complete waste, as I should have been 'coding' something. However, I have come to realize I learned a lot about the nitty-gritty of hardware limitations. A 9600 baud modem is not going to keep up with people playing xTrek on Ethernet. Moving to a 56k modem later isn't going to keep up with a 56k partial T1: the numbers are the same, but they are counting different things. A 5400 RPM IDE hard drive is never going to be as good as a 5400 RPM SCSI disk, even if it is larger. 8 MB on a Sparc was enough for a MUD, but on a PC it ran into problems because the CPU and MMU were not as fast or 'large'.

All of this became useful years later when I worked at Red Hat between 1997 and 2001. The customers at that time were people who had been using 'real Unix' hardware and were at times upset that Linux didn't act the same way. In most cases it was the limitations of the hardware they had bought to put a system together, and by being able to debug that and recommend replacements, things improved. Being able to compare how a Convex used disks, or how SGI did graphics, to the limitations of the old ISA and related buses helped show that you could redesign a problem to meet the hardware. [In many cases, it was cheaper to use N PC systems to replicate the behaviour of 1 Unix box, but the problem needed to be broken up in a way that worked on N systems versus 1 box.]

So what does this have to do with Linux today? Well, mostly reminders to me to be less cranky with people who are
  1. Having fun breaking things on their computers. People who want to tear apart their OS and rebuild it into something else are going to run into lots of hurdles. Don't tell them it was a stupid thing to try. The people at Cygnus may have rolled their eyes, but they never told me to stop trying something, just to read the documentation and see that it says 'undefined behavior' in a lot of places.
  2. Working with tiny computers to do stuff that you would do on a bigger computer these days. It is very easy to think that, because it is 'easier' and currently more maintainable to do a calculation on one large Linux box, someone is wasting time using dozens of Raspberry Pis to do the same thing. But that is what the mainframe people thought of the minicomputers, the minicomputer people thought of the Unix workstations, and the Unix people thought of Linux on PCs.
  3. Seeming to spin around, not knowing what they are doing. I spent a decade doing that.. and while I could have been more focused.. I would have missed a lot of things that happened otherwise. Sometimes you need to do that to actually understand who you are. 


Please end Daylight Saving Time

This was going to be my 3rd article this week about something EPEL related, but I am having a hard time stringing any words together coherently. The following instead boiled out and I believe I have removed the profanity that leaked in.

So I, like millions of other Americans (except those blessed to be living in Arizona and Hawaii), am going through the week-long jet lag that comes when Daylight Saving Time starts. For the last 11? years, the US has had DST start 2 weeks earlier than the rest of the countries which observe this monstrosity, I think to show the rest of the world why it is a bad idea. No one seems to learn, and instead they try to make it longer and longer.

I understand why it was begun during World War I: to make electricity costs for lighting factories cheaper. It just isn't solving that problem anymore. Instead I spend a week not really awake during the day and, for some reason as I get older, not able to sleep at all during the night. And I get crankier and more sarcastic by the day. Finally, sometime next Monday, I will conk out for 12-14 hours and be alright again. I would like to say I am an anomaly, but this seems to happen to a lot of people around the world, with higher numbers of heart attacks, strokes and accidents during the month of the time change.

So please, next time this comes up with your government (be it the EU, the US, the Canadian Parliament, etc.), write to your representatives that this needs to end. [For a fun read, the Wikipedia articles on the various forms of daylight saving time cover the political shenanigans used to pay for this.]

Thank you for your patience while I whine about nothing worse than a lack of sleep when there are a hell of a lot worse things going on in the world.


How to test an update for EPEL

Earlier this week the maintainer of clamav came onto the Freenode #epel channel asking for testers on EL-6. There was a security fix needing to be pushed to stable, but no one had given the package any karma in bodhi.

EPEL tries to straddle the slow and steady world of Enterprise Linux and the fast and furious world of Fedora. This means that packages are usually held in epel-testing for at least 14 days, or until the package has been tested by at least 3 people who give it a positive score in bodhi. Because EPEL is a 'Stone Soup' set of packages, it does not have a dedicated QA team testing every update; it instead relies on what people bring to the table, testing things if they need them. This has its benefits, but it does lead to problems where someone who wants to get a CVE fix out right away has to find willing testers or wait 14 days for the package to auto-promote.

Since I had used clamav years ago, and I needed an article to publish on Wednesday.. I decided I would give it a go. My first step was to find a system to test with. My main website still runs happily on CentOS-6, and I saw that while I had configured spamassassin with postfix, I had not done so with clamav. This made a good test candidate because I could roll back to the older setup if the package did not work.

The first step was to install the clamav updates. Unlike my desktop, where I have epel-testing always on, I keep the setup rather conservative on the web server. So to get the testing version of clamav I needed to do the following:

# yum list --enablerepo=epel-testing clamav*
Available Packages
clamav.i686                     0.99.4-1.el6                 epel-testing
clamav-db.i686                  0.99.4-1.el6                 epel-testing
clamav-devel.i686               0.99.4-1.el6                 epel-testing         
clamav-milter.i686              0.99.4-1.el6                 epel-testing

I then realized I had only configured clamav with sendmail in the past (yes, it was a long time ago.. I watched the moon landings too.. and I can mostly remember what I had for breakfast). I googled through various documents and decided that a document at vpsget was a useful one to follow (thank you, vpsget). Next up was to see if the packages listed had changed, which they had not. So it was time to do an install:

# yum install --enablerepo=epel-testing clamav clamsmtp clamd
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirrordenver.fdcservers.net
 * epel-testing: mirror.texas3006.com
 * extras: repos-tx.psychz.net
 * updates: centos.mirror.lstn.net
Resolving Dependencies
--> Running transaction check
Is this ok [y/N]:

I didn't use a -y here because I wanted to confirm that no large number of dependencies or other things were being pulled in. It all looked good, so I hit y and the install happened. I then went through the other steps and saw that there was a change in the setup from when the document was written:

[root@linode01 smooge]# chkconfig --list | grep clam
clamd           0:off   1:off   2:off   3:off   4:off   5:off   6:off
clamsmtp-clamd  0:off   1:off   2:off   3:on    4:on    5:on    6:off
clamsmtpd       0:off   1:off   2:off   3:on    4:on    5:on    6:off

I turned on clamsmtp-clamd instead of clamd and continued through the configs. After this I emailed an EICAR test file to myself and saw it get blocked in the logs. I then looked at the CVE to see if I could trigger a test against it. It didn't look like I could, so I skipped that part. I then repeated the setup in an EL-6 VM I have at home to see that it worked there also. At this point it was time to report my findings in bodhi. I opened the report the owner had pointed me to and logged into the bodhi system. I added a general comment that I had tested it on 2 systems, and then plus-1'd the parts I had tested. Other people joined in, and this package was able to get pushed much earlier than it would have been otherwise.
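
For anyone wanting to replicate the smoke test, the commands looked roughly like the following. A sketch: the service names are the ones from the chkconfig output above, and the EICAR string is the standard, harmless antivirus test pattern:

# chkconfig clamsmtp-clamd on
# service clamsmtp-clamd start
# service clamsmtpd start
# printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/eicar.txt
# clamscan /tmp/eicar.txt

clamscan should flag the file as Eicar-Test-Signature; the mail-path test is then just emailing yourself the same string and watching the logs.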

There are currently 326 packages in epel-testing for EL-6. Please take the time to test one or two packages if you can. [I did pax-utils while writing this because I wanted to replicate the steps I had done. It needs 2 more people to test, and it is a security fix also.]
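
If you prefer the command line to the web UI, the bodhi client can leave the same comment and karma. A sketch, assuming the bodhi-client package is available and using a made-up update ID; check bodhi updates comment --help for the exact syntax of your client version:

# yum install bodhi-client
# bodhi updates comment FEDORA-EPEL-2018-0123456789 "Tested install, config, and EICAR catch on CentOS-6 x86_64" --karma 1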


2.5 Year Warning: EPEL-6 will be archived in January 2021.

EPEL builds packages against Red Hat Enterprise Linux versions which are in Production Phases 1, 2, and 3. RHEL-6 will reach the end of Production Phase 3 on November 30, 2020. At that point, EPEL will follow the steps it did with RHEL-5 to end-of-life EPEL-6:
  1. New builds will be stopped in the koji builders.
  2. Branching into EL-6 will be stopped in the Fedora src mechanism.
  3. Packages in epel-6 testing will no longer be promoted to epel-6.
  4. After about 2 months, we will archive the packages to the Fedora archives and point the mirrors at that.
What does this mean for users of EPEL-6 currently? Nothing much, beyond the fact that you should start planning on moving to newer versions of (RH)EL in the next 2.5 years. [This includes me, because my main website runs on CentOS-6.] If your EL-6 systems will be running past December 1, 2020, then you need to look at getting extended support contracts from Red Hat (or some consultant who is mad enough to do so). [Red Hat Enterprise Linux 6 was initially released in 2010, so it will have had 10 years of support by then.]
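
For reference, archived releases stay usable read-only. When the time comes, pointing a straggler EL-6 box at the archive should look something like the EPEL-5 setup does today. A sketch: the exact EPEL-6 archive path is an assumption until the archive actually happens:

[epel-archive]
name=EPEL-6 archived packages (no further updates)
baseurl=https://archives.fedoraproject.org/pub/archive/epel/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6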

What does this mean for the EPEL Steering Committee? We need to work out a better mechanism than we had in EL-5 for packages which were end-of-lifed. Currently the build system composes each EPEL tree as if it were a completely new distribution of packages. When a package is retired by its maintainer, the only way for a user to get that copy is to fetch the last released build from koji.fedoraproject.org instead of from a mirror. This puts a lot more load on koji, and also on users who have to figure out how to keep an old box going.
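
To make that last point concrete, grabbing a retired package today means something like the following. A sketch: 'foo-1.0-1.el6' is a hypothetical build NVR, and you have to hunt down the real one in the koji web interface first:

# yum install koji
# koji download-build --arch=x86_64 foo-1.0-1.el6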


Using the Red Hat Developer Toolset (DTS) in EPEL-7

One of the problems developers find in supporting packages for any long-lived Enterprise Linux is that it gets harder and harder to compile newer software. Packages may end up requiring newer compilers and other tools in order to be built. Back-porting fixes or updating software becomes more difficult because the tools are no longer available to make the newer code work.

In the past, this has been a problem for EPEL packages as various software upstreams focus on newer toolchains to meet their development needs. This has led to many packages either being removed or left to mummify at some level. The problem occurs outside of EPEL also, which is why Red Hat created a product called the Developer Toolset (DTS), which contains newer gcc and other tools. This product uses software collections, which have had a mixed history with Fedora and EPEL but were considered useful in this limited case.

How to Use DTS in spec files

In order to use DTS in a spec file you will need to do the following:
  1. If you are not using mock and fedpkg to build packages, you will need to either add the Red Hat DTS channel to your system or, if you are using CentOS/Scientific Linux, add the repository by following these instructions.
  2. If you are using mock/fedpkg, the scl.org repository should be available in the epel mock configs.
  3. In the spec file add the following section to the top area:
    %if 0%{?rhel}
    BuildRequires: devtoolset-7-toolchain, devtoolset-7-libatomic-devel
    %endif
    Then in the build section add the following:
    %if 0%{?rhel}
    . /opt/rh/devtoolset-7/enable
    %endif
  4. Attempt to do a build using your favorite build tool (rpmbuild, mock -r, fedpkg mockbuild, etc.). A fuller spec sketch follows below.
This should start showing what things you might need to add to the BuildRequires, and similar problems. We in the EPEL Steering Committee would like to get feedback on this and work out what additions are needed to get this working for other developers.
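
Putting the pieces together, a minimal spec might look like the following. This is a sketch, not a tested example: the package name 'example' and its sources are hypothetical, and only the %if blocks come from the instructions above.

Name:           example
Version:        1.0
Release:        1%{?dist}
Summary:        Example program built with the Developer Toolset
License:        MIT
Source0:        example-1.0.tar.gz

# Pull in the newer DTS gcc only when building on RHEL/EPEL
%if 0%{?rhel}
BuildRequires:  devtoolset-7-toolchain, devtoolset-7-libatomic-devel
%endif

%description
An example showing the DTS BuildRequires and the enable step.

%prep
%setup -q

%build
# Put the DTS gcc and binutils at the front of $PATH for this build only
%if 0%{?rhel}
. /opt/rh/devtoolset-7/enable
%endif
%configure
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/example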


There are several caveats to using the Developer ToolSet in EPEL.
  1. Packages may only have a BuildRequires: on the packages in the DTS. If your package would need to Require: something in the DTS or software collections at runtime, it can NOT be in EPEL at this time, as many users do not have these repositories enabled.
  2. This is only for EPEL-7. At the moment, I have not set up DTS for EL-6 because no one has asked for it recently. The Steering Committee would like to hear from developers who want it enabled in EL-6.
  3. The architectures where DTS exists are x86_64, ppc64le, and aarch64. There is no DTS for ppc64, and we do not currently have an EPEL for s390x.


Our thanks to Tom Callaway and many other developers for their patience in getting this working.


  • Originally the article stated that the text %if 0%{?rhel} == 7 should be used. That fails. The correct code is %if 0%{?rhel}
  • If you build with mock, you are restricted to pulling in only the DTS packages. Currently koji does not have this limitation, which is being fixed.