CentOS

Migrating my server from FreeBSD to CentOS. I’ve been on FreeBSD since the beginning in 1997, but a couple of things make it no longer suit my needs. The main one is the built-in firewall: neither IPF nor PF does a lot of the things I need it to do, while iptables will do everything I want and then some. There are a couple of other reasons, but that is the main one.

On this page I’ll be keeping track of the various issues I come across while doing the setup and how to solve them. Primarily for my own reference, but it might help others as well.

Will be setting up Serviio, Sick Beard, SABnzbd, Postfix, BIND, Dovecot, Apache, SpamAssassin, and Horde groupware. I’ve been using Exim for years, but I’m going to try Postfix again.

2 Responses to CentOS

  1. robbob2112 says:

    First CentOS:

    Tried a variety of OS before settling on this.

    Ubuntu – tried it several times, in different versions. It works great as a desktop machine and even as a media server, but not so much as a web server. It can be made to work, but the constant updates for all sorts of things, and the new features added all the time, are not what I want on a server; once it is running, I just do the security updates and leave it at that. At least twice an update left the system unbootable until I booted a CD and recovered it manually.

    Fedora – similar to Ubuntu: great as a desktop on a newer machine, but the stream of updates and short support cycle aren't so great on a server.

    RHEL – great for a server, but it costs money, and since this is just for me and family there's no reason to spend the $$ on it.

    CentOS – recompiled RHEL – no fees, but the updates and patches run a day or two behind. So long as I stay on top of them, the only real risk is zero-day issues. It works well as a desktop on older hardware too.

    Decision is CentOS.

    As with any new OS, I re-installed it about 20 times on different machines to figure out the various issues and details. Converted one of my work desktops to it from Ubuntu and built/rebuilt an old system here at the house a few times.

    First, for UEFI boot you have to use the DVD1 image file or prepare a flash drive with the correct software. I went with the DVD image because it was simple and the server has a drive in it.

    Boot the DVD in UEFI mode and do the install using software RAID1 mirrors. The final home will be a pair of 500 GB WD RE4 drives, but those are still in the FreeBSD machine, so for now I'm installing on a pair of ordinary 500 GB WD drives; once everything is moved over I'll break the mirrors one drive at a time and swap in the RE4s (see the sketch below).
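
    A rough sketch of how that swap should go when the time comes (drive names are placeholders – assuming the installer built /dev/md0 out of sda1/sdb1; partitioning the new drive to match and reinstalling the bootloader on it are separate steps not shown here):

    # mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    (power down, swap in the RE4, partition it to match the other drive)
    # mdadm /dev/md0 --add /dev/sdb1
    # watch cat /proc/mdstat      (wait for the resync, then repeat with the second drive)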

    Details of installing can be found elsewhere, so I won't go into that here. The basic philosophy is to install as minimal a system as possible and add things in as I need them.

    I mostly won't use the desktop, but it sure is handy at times, so I installed it. Also, the Serviio console can connect remotely, but it is much simpler to configure on the local system.

    Ran into big issues with the display driver. I have an Nvidia 2xx series card in the machine and the default nouveau driver didn't play nice at all.

    Disable nouveau for boot:

    # vi /etc/modprobe.d/blacklist.conf      (add the line: blacklist nouveau)
    # mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
    # dracut -v /boot/initramfs-$(uname -r).img $(uname -r)
    # reboot

    Now it doesn't hang at boot time starting X. Download the Nvidia Linux driver, install it, reboot again, and everything is good with that.
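
    Quick sanity check after that reboot – nouveau should no longer be loaded, and once the Nvidia driver is installed its module should be:

    # lsmod | grep nouveau      (should print nothing)
    # lsmod | grep nvidia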

    Next,

    Add the EPEL repository – I'll need a bunch of things out of it eventually, but for now gparted and gdisk. Why? Because the next step is to build a RAID5 array using 4 x 3 TB disks. I was going to use 5 disks, but one failed.
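
    Something like this should do it (the exact epel-release package depends on the CentOS release – if yum can't find it, the epel-release RPM can be grabbed from the EPEL project page and installed with rpm instead):

    # yum install epel-release
    # yum repolist | grep epel      (make sure the repo shows up)
    # yum install gparted gdisk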

    First, nuke whatever boot record is on the disks (sdc, sdd, sde, and sdf) by zeroing the first few sectors:
    # dd if=/dev/zero of=/dev/sdc bs=512 count=4
    # dd if=/dev/zero of=/dev/sdd bs=512 count=4
    # dd if=/dev/zero of=/dev/sde bs=512 count=4
    # dd if=/dev/zero of=/dev/sdf bs=512 count=4

    Next, these are 4k sector WD green drives and if you mis-align the starting sector it cuts the performance in half.

    Use parted to create a GPT label, then lay down the partition and check its alignment:
    # parted /dev/sdc
    (parted) mklabel gpt
    (parted) mkpart primary 1049kB -1
    (parted) set 1 raid on
    (parted) align-check optimal 1
    (parted) quit

    Repeat for the other drives so they are all identical (a scripted version is sketched below).
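
    If you'd rather script it than repeat the interactive session, something along these lines should produce the same layout on the remaining drives (untested sketch – parted option syntax varies a little between versions):

    # for d in sdd sde sdf; do
    >     parted -s /dev/$d mklabel gpt
    >     parted -s /dev/$d mkpart primary 1049kB 100%
    >     parted -s /dev/$d set 1 raid on
    >     parted -s /dev/$d align-check optimal 1
    > done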

    # mdadm --create /dev/md127 --chunk=1024 --level=5 --raid-devices=4 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1

    Check progress
    # watch -n .1 cat /proc/mdstat

    and wait – mine took 10 hours to build. There is a --assume-clean option to create that skips the initial sync, but I didn't research or use it.
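
    For reference, creating with --assume-clean would look roughly like this – though for RAID5 it is generally a bad idea unless the drives are already all zeros, since the parity starts out inconsistent:

    # mdadm --create /dev/md127 --chunk=1024 --level=5 --raid-devices=4 --assume-clean /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1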

    Now is the time to set up LVM if you want to; in my case I don't have any reason to, so I left it out.

    Parameters for mkfs.ext4 on the array:
    -m 0 – don't reserve any space for the root user
    block size: the filesystem block size (e.g. 4096)
    stripe size: the same as the mdadm chunk size (e.g. 1024k)
    stride: stripe size / block size (e.g. 1024k / 4k = 256)
    stripe-width: stride * number of data disks (a 4-disk RAID5 has 3 data disks, so 256 * 3 = 768; quick check below)
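
    A quick arithmetic check of those two numbers in bash, using the values above:

    # chunk=1024; block=4; data_disks=3
    # echo "stride=$(( chunk / block )) stripe-width=$(( chunk / block * data_disks ))"
    stride=256 stripe-width=768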


    # mkfs.ext4 -m 0 -b 4096 -E stride=256,stripe-width=768 /dev/md127
    mke2fs 1.41.12 (17-May-2010)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=256 blocks, Stripe width=768 blocks
    549404672 inodes, 2197599744 blocks
    0 blocks (0.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=4294967296
    67066 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000, 550731776, 644972544, 1934917632

    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information:
    done
    This filesystem will be automatically checked every 20 mounts or
    180 days, whichever comes first. Use tune2fs -c or -i to override.

    Now, add it to /etc/fstab – need the UUID
    # blkid /dev/md127
    /dev/md127: UUID="f946b386-295f-4d4b-b7f9-7559fceabcbc" TYPE="ext4"

    vi /etc/fstab and add:
    UUID=f946b386-295f-4d4b-b7f9-7559fceabcbc /data ext4 defaults 1 2
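
    Optional, but worth doing at this point so the array always assembles under the same name at boot – record it in /etc/mdadm.conf:

    # mdadm --detail --scan >> /etc/mdadm.conf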

    # mkdir /data
    # mount /data
    # df -h /data
    Filesystem Size Used Avail Use% Mounted on
    /dev/md127 8.1T 11G 8.1T 1% /data

    test write speed
    # dd if=/dev/zero of=/data/test.out bs=1M count=10240
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 93.7925 s, 114 MB/s

    test read speed (in cache)
    # dd if=/data/test.out of=/dev/null bs=1M
    10240+0 records in
    10240+0 records out
    10737418240 bytes (11 GB) copied, 1.1452 s, 9.4 GB/s

    test read speed out of cache (same command, but against a larger 20 GB test file written the same way with count=20480, so it can't all sit in cache)
    # dd if=/data/test.out of=/dev/null bs=1M
    20480+0 records in
    20480+0 records out
    21474836480 bytes (21 GB) copied, 60.6907 s, 354 MB/s

    And done… time to copy data back to the array from the external backup drives.

    • robbob2112 says:

      And now I added a 5th drive into the array today

      Just use parted again, this time on /dev/sdg, and create a partition table the same as above (mklabel gpt), but this time I followed it up with mkpart using the interactive menu – the answers are 1, primary, 1049kB, -1 – and it creates the partition with the proper offset to match the other drives. In this case it is a Seagate drive, but it still works the same.

      # parted /dev/sdg
      GNU Parted 2.1
      Using /dev/sdg
      Welcome to GNU Parted! Type 'help' to view a list of commands.
      (parted) print
      Model: ATA ST3000DM001-9YN1 (scsi)
      Disk /dev/sdg: 3001GB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt

      Number  Start   End     Size    File system  Name     Flags
       1      1049kB  3001GB  3001GB               serviio  raid

      (parted)

      Then add to the array – note it is quicker to resync if you take it offline

      # mdadm --add /dev/md127 /dev/sdg1
      # mdadm --grow --raid-devices=5 /dev/md127

      It took about 26 hours to reshape the array because I left it online.

      After the reshape finished, the filesystem has to be grown out to the full size of the new array:

      # umount /data
      # e2fsck -f /dev/md127
      # resize2fs /dev/md127
      # mount /data

      and now it is all good: 12 TB instead of 9 TB.

      Side notes:
      Added an internal bitmap – it should help shorten any future resyncs. Maybe I should have done this as an external bitmap on the boot drives (sketch below), but for my normal day-to-day use, size is more important than speed.

      # mdadm --grow --bitmap=internal /dev/md127
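
      If I ever redo it as an external bitmap, it should be roughly this (the path is just an example – the bitmap file has to live on a filesystem outside the array):

      # mdadm --grow --bitmap=none /dev/md127
      # mdadm --grow --bitmap=/var/md127.bitmap /dev/md127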

      On the note of speed – I changed the mount options in /etc/fstab from the defaults to these, and it made some difference:
      noatime,data=writeback,barrier=0,nobh,errors=remount-ro
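
      With the UUID from earlier, the /data line in /etc/fstab now looks like this:

      UUID=f946b386-295f-4d4b-b7f9-7559fceabcbc /data ext4 noatime,data=writeback,barrier=0,nobh,errors=remount-ro 1 2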

      Verify write cache is on in the drives using hdparm:

      -bash-4.1# hdparm -W /dev/sdc

      /dev/sdc:
      write-caching = 1 (on)
      -bash-4.1# hdparm -W /dev/sdd

      /dev/sdd:
      write-caching = 1 (on)
      -bash-4.1# hdparm -W /dev/sde

      /dev/sde:
      write-caching = 1 (on)
      -bash-4.1# hdparm -W /dev/sdf

      /dev/sdf:
      write-caching = 1 (on)
      -bash-4.1# hdparm -W /dev/sdg

      /dev/sdg:
      write-caching = 1 (on)

      Now, 4 of my 3 TB drives are WD Green drives. They were the largest I could get at the time, but in retrospect they were probably not the best choice. Why? Because they have a firmware feature that parks the heads on their own: on a normal drive, hdparm -S ## sets the spindown time and the drive honors it, but not the WD Greens. The net result is that they constantly load and unload, and I expect they will fail before they should and I'll have to replace them one at a time. They are already at twice the rated number of load cycles (300k) and climbing:

      # foreach bob ( c d e f g )
      foreach? smartctl -a /dev/sd$bob | grep Load
      foreach? end
      193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 664822
      193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 676213
      193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 689929
      193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 678514
      193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 56
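
      (That loop is tcsh; the bash equivalent is a one-liner:)

      # for d in c d e f g; do smartctl -a /dev/sd$d | grep Load_Cycle; done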

      Going to use this page to see if I can disable the auto-park-every-30-seconds-and-go-to-sleep behavior. Then the drives should pay attention to the -S option.

      http://wiki.ubuntuusers.de/WD_IntelliPark (of course it is in German, so Google Translate is my friend)
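
      If I'm reading that page right, the idle3-tools package (idle3ctl) can read and clear that WD idle timer – untested here, so treat this as a sketch; the change only takes effect after the drive is power-cycled:

      # idle3ctl -g /dev/sdc      (show the current idle3 timer)
      # idle3ctl -d /dev/sdc      (disable it)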

      Read the man page on hdparm – the -S timeout values are encoded in a pretty wacky way (1 to 240 means that many 5-second units, and 241 to 251 means 1 to 11 units of 30 minutes). On the new drive I am setting it to 5.5 hours of idle; I thought about turning it off entirely by setting 0, but will see how this goes.

      # hdparm -S 251 /dev/sdg

      /dev/sdg:
      setting standby to 251 (5 hours + 30 minutes)

      and I’ve had enough fun for the day
