Server 7.2 - the long-awaited server virtualization update
VMware ESXi 6.7 is now installed as a bare-metal hypervisor, replacing the FreeBSD operating system that previously ran directly on the hardware. It boots off an SSD, with a VMFS datastore named bootssd:
The SSD stores the storage VM, napp-it, which runs the napp-it appliance on OmniOS 151028 (OpenIndiana also works; both are based on OpenSolaris). PCIe passthrough is configured in ESXi to give this VM direct access to the LSI Logic LSI00344 9300-8i SGL SAS 8-port 12Gb/s PCIe 3.0 HBA controller card, where three new 6TB drives are connected:
...configured in a ZFS pool named tanksix, with ZFS filesystems vmdata and userdata. Both filesystems are shared over NFS: vmdata to VMware ESXi for storing other VMs, and userdata to the VMs themselves.
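The pool and share setup can be sketched roughly as below. This is a hypothetical reconstruction, not the exact commands used: the device names are illustrative, the pool layout (mirror) is an assumption, and napp-it can do all of this from its web UI anyway.

```shell
# Sketch only: assumes a three-way mirror; OmniOS device names are illustrative
zpool create tanksix mirror c2t0d0 c2t1d0 c2t2d0
zfs create tanksix/vmdata
zfs create tanksix/userdata
# Export both filesystems over NFS
zfs set sharenfs=on tanksix/vmdata
zfs set sharenfs=on tanksix/userdata
```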
The existing FreeBSD installation was virtualized using the steps described in this post: Converting a ZFS-based FreeBSD installation to a UFS image for virtualization. Long story short: since ZFS is now used at a lower layer, underneath the VMs, I converted my root filesystem and all mount points besides /usr/home to a UFS disk image, converted that to a VMDK, and imported it into VMware. /usr/home is mounted via NFS from the napp-it tanksix/userdata share. PCIe passthrough was enabled for the WAN interface of the Intel Gigabit NIC. It mostly worked as-is; only a few minor changes were needed:
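The /usr/home mount boils down to a single /etc/fstab line on the FreeBSD VM; a sketch, with a made-up address standing in for the storage VM:

```
# Hypothetical fstab entry; replace 192.0.2.10 with the napp-it VM's address.
# "late" defers the mount until the network is up.
192.0.2.10:/tanksix/userdata  /usr/home  nfs  rw,late  0  0
```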
12GB of RAM is allocated to the napp-it VM, the most of any VM, because ZFS caches aggressively in memory. Less is now needed on the FreeBSD VM and the other specialized VMs.
I used to run all services on FreeBSD, but now that there is virtualization infrastructure to play with, I moved the game server to DragonFly BSD 5.6.1. The DragonFly VM is stored on the SSD for fast access, but mounts part of the game data over NFS. The game server now runs in its own operating system instead of a FreeBSD jail. Other virtual machines running Windows 10 and OpenBSD 6.5 are also installed for various experiments.
This new setup using virtualization has now finally achieved what I had in mind when purchasing the server-grade hardware six years ago.
Server 7.1.4 - another minor update, the calm before the storm, preparing for bigger changes ahead. Moved each of the 4TB hard drives to make room for new disks. Removed two old 1TB drives I had installed for some reason but left disconnected, and moved two of the 4TB drives from the upper bracket near the power supply to the lower triple-slot 5.25" bracket in the front.
This was easier said than done, as accessing this bracket required removing the Noctua NH-D15s CPU fan and heatsink first, removing the bracket, installing drives, then cleaning and reapplying thermal paste and reinstalling the CPU heatsink and fan. But now the more accessible drive bays are ready for new disks.
The third 4TB drive moved from the iStarUSA TC-ISTORM8 iStorm8 HDD Heat Sink Cooler (2017) to the older Kingwin KW1 drive cooler plus fan, both in 5.25" bays. I swapped their positions so the 4TB drives are closer to each other. I had stopped using the Kingwin because the fan was noisy, but blowing it out with compressed air resolved that. However, the removable SATA passthrough is broken, so I removed the PCB and the fan, leaving the drive inside the Kingwin drive cooler bay without any of the circuitry.
To sum it up, the new drive configuration:
All three 4TB drives now connect through the HBA card, on the same SFF-8643-to-SATA cable, for simplicity. The new drives will use the other port via the SFF-8482 SAS cable, so I can quickly and conveniently disconnect and reconnect the set of old and the set of new drives during migration.
As part of this preparation I attempted to flash the LSI Logic LSI00344 9300-8i SGL SAS 8Port 12Gb/s PCIE3.0 HBA Controller Card (SAS 9300-8I HOST BUS ADAPTER) from IT mode to IR mode, intending to use hardware RAID, following in reverse Servethehome: How to flash a LSI SAS 3008 HBA (e.g. IBM M1215) to IT mode. The discussion at Can I flash IR firmware onto an LSI 9207-8i HBA that currently has the IT FW implies this is possible, at least for similar cards, by "crossflashing" to hardware-RAID-capable firmware. So I tried flashing from the EFI shell, but Broadcom's knowledge base article 1211161501344: flashing firmware and BIOS on LSI SAS HBAs confirms that going from IR to IT or vice versa requires flashing under DOS. I booted into FreeDOS, but the flash failed with ERROR: Failed to initialize PAL. Exiting program, which Broadcom says indicates missing BIOS32 functionality in the motherboard. In fact, they specifically call out "Motherboard used: Any except SMX (Supermicro X9 motherboards)", and that is exactly what I have. Long story short, I'm stuck in IT mode for now, unless I want to buy/borrow another motherboard temporarily to reflash the 9300-8i to IR, or buy a different hardware-RAID-capable card (the 9311-8i supports RAID levels 0, 1, 1E, and 10; the 9361-8i also supports 5, 6, 50, and 60; I'm not considering the 6Gb/s 92xx series, and the 94xx tri-mode SAS/SATA/NVMe cards such as the 9460-8i and 9440-8i are probably overkill, although fairly affordable on eBay). This isn't necessarily a bad thing, however: hardware RAID seems to be falling out of favor compared to IT-mode direct passthrough.
Finally installed a brand new video card (also had to be removed to access the bottom drive bracket), an ATI Radeon HD 5770 1GB PCIe x16. Takes up two slots next to the CPU cooler and SAS card. Not the most high-end GPU, but it is something. FreeBSD also supports it according to their AMD GPU compatibility matrix: AMD Radeon HD 5770, Evergreen / Juniper, minimum FreeBSD versions 9.3 or 10.0, and 11.2+ 'use "radeonkms" kernel module via drm-kmod port'. I may end up using PCI passthrough (VT-d) to access this card in a Windows virtual machine instead. For now, it serves as the server console. Next stop: server virtualization.
Server 7.1.3 - replaced the stock Intel CPU cooler with a Noctua NH-D15s:
This cooler works amazingly well, even with the included low-noise adapter installed, but future improvements are possible: rotating the backplane 90º so the CPU fan cable can reach without using the low-noise adapter as an extension (requires removing the motherboard), replacing the thermal compound (with something other than the included NT-H1), or adding a second CPU fan with the included clip. Extra case fans could also be added, but the current cooling setup seems adequate. At the time of this writing, the core zero temperature is a mere 29.1ºC.
Server 7.1.2 - as promised, an important (though minor) upgrade: adding a SAS (Serial Attached SCSI) card, for improving hard drive performance and future expansion options.
The 2015 hard drive installation had a motherboard-imposed performance limitation: only two of the SATA ports are SATA-3 6Gb/s while the other four are SATA-2 3Gb/s, and the pool is a three-way mirror. All of the disks support SATA-3, but full performance can't be achieved in this configuration. To improve this, I purchased these items:
The X9SCA motherboard has quite a few slots, including both PCI-Express 2.0 and 3.0. The LSI card supports PCI-E 3.0 x8, which will fit in any of the PCI-E slots. The 2.0 motherboard slots are physically x8 but only have x4 lanes; the card will work there, albeit at a slower speed (an option if the one x16 PCI-E 3.0 slot is ever needed for a graphics card). To run at maximum speed with 8 lanes, the 3.0 slot is required. That said, here is the new motherboard slot configuration, from bottom to top:
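The lane arithmetic behind the slot choice, as a quick back-of-the-envelope check (per-lane figures are approximate usable bandwidth after encoding overhead, not exact):

```shell
# Approximate usable bandwidth per lane:
# PCIe 2.0 ~500 MB/s (8b/10b), PCIe 3.0 ~985 MB/s (128b/130b)
echo "PCIe 2.0 x4 slot: $((4 * 500)) MB/s"   # prints PCIe 2.0 x4 slot: 2000 MB/s
echo "PCIe 3.0 x8 slot: $((8 * 985)) MB/s"   # prints PCIe 3.0 x8 slot: 7880 MB/s
```

Even the x4 PCI-E 2.0 slot has headroom for a few 6Gb/s drives, but the 3.0 slot leaves room for a full complement of SAS-3 disks.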
In order to cool the SAS card, the "Titan Adjustable Dual Fan PCI Slot VGA Cooler" is inserted in the slot next to it. This cooler has two fans fed through a splitter cable, and a fan-speed adjustment control exposed on the outside of the case; it connects internally to the motherboard FAN4 header.
In /boot/loader.conf, added mpr_load="YES". dmesg shows the driver is loaded:
port 0xe000-0xe0ff mem 0xdfc40000-0xdfc4ffff,0xdfc00000-0xdfc3ffff irq 16 at device 0.0 on pci1
mpr0: IOCFacts :
mpr0: Firmware: 15.00.02.00, Driver: 09.255.01.00-fbsd
mpr0: IOCCapabilities: 7a85c
SAS is backwards-compatible with SATA, so the SFF-8643-to-4xSATA cable can be used initially to achieve 6Gb/s performance with my existing drives. I connected the topmost drive (previously ada2, on motherboard SATA port 4) to P1, where it shows up as da0, and the middle drive (previously ada1, on port 2) to P2, where it shows up as da1; both connect through the SFF-8643-to-4xSATA cable plugged into the bottom port of the new SAS card. To avoid putting all the eggs in one basket, I left ada0 on the motherboard, on one of the two ports that are SATA 3.x, 600 MB/s. All said and done, all three drives now run at SATA 3.x, 600 MB/s, full speed:
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
da0: 600.000MB/s transfers
da1: 600.000MB/s transfers
The system booted with no problems after these changes, the ZFS pool automatically detecting the moved drives. A curious artifact of the new setup is that the pool names the disks with a diskid/DISK-%20%20%20%20%20%20%20%20%20%20%20%20... prefix, because the SAS card reports serial numbers padded with leading spaces.
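Those %20 sequences are just URL-encoded spaces; a quick decode makes the padding visible (the serial number here is made up):

```shell
# Decode the %20-escaped diskid; the serial 'S3R14L' is a placeholder
diskid='DISK-%20%20%20%20S3R14L'
decoded=$(printf '%s' "$diskid" | sed 's/%20/ /g')
echo "[$decoded]"   # prints [DISK-    S3R14L]
```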
To manage this controller, installed the sysutils/sas3ircu port:
$ sudo sas3ircu list
Avago Technologies SAS3 IR Configuration Utility.
Version 11.00.00.00 (2015.08.04)
Copyright (c) 2009-2015 Avago Technologies. All rights reserved.

         Adapter      Vendor  Device                       SubSys  SubSys
 Index    Type          ID      ID    Pci Address          Ven ID  Dev ID
 -----  ------------  ------  ------  -----------------    ------  ------
   0     SAS3008     1000h    97h   00h:01h:00h:00h      1000h   30e0h
SAS3IRCU: Utility Completed Successfully.
While this performance doubling improvement is great, the real purpose of this upgrade was to pave the way for new drives. All of these drives have their warranty expiring September 21st, 2020, so it is time to start thinking about replacements soon, and SAS drives are now an option.
Therefore, the next upgrade could be to use SAS drives, connecting with the SFF-8643 to SFF-8482 cable (which I ordered proactively, but have not yet used). The 9300-8i has 8 internal ports, four per cable connector, so both could even be used at the same time: old SATA and new SAS drives, each cable providing up to 4 drives. SAS-3 supports up to 12Gb/s per link, so a new array connected to this HBA could ultimately run at twice the current 6Gb/s link speed (and four times the old 3Gb/s motherboard ports), with plenty of room for future upgrades!
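The per-link arithmetic behind that claim, as a quick sanity check (raw line rates, ignoring protocol overhead):

```shell
# Line rates in Gb/s for the interface generations discussed above
sata2=3; sata3=6; sas3=12
echo "SAS-3 vs SATA-3: $((sas3 / sata3))x"   # prints SAS-3 vs SATA-3: 2x
echo "SAS-3 vs SATA-2: $((sas3 / sata2))x"   # prints SAS-3 vs SATA-2: 4x
```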
Server 7.1.1 - a failing cooling fan in the 5.25" drive bay housing one of the SATA drives has been replaced with the fanless iStarUSA TC-ISTORM8 iStorm8 HDD Heat Sink Cooler. Completely silent! More upgrades planned soon.
The server memory was also upgraded to the maximum, 32 GB, purchased on July 5th, 2017: Kingston Technology ValueRAM 32GB Kit of 4 DDR3 1600MHz PC3 12800 ECC CL11 DIMM with TS Server Workstation Memory for $355.88.
Server 7.1 - a new clean install of FreeBSD 10.2, and several hardware updates:
|Storage||3 x Seagate Enterprise NAS ST4000VN0001 4TB 7200 RPM 128MB Cache SATA 6.0Gb/s 3.5" Internal Hard Drive Bare Drive|
|Storage||SAMSUNG 850 EVO 2.5" 250GB SATA III 3-D Vertical Internal Solid State Drive (SSD) MZ-75E250B/AM|
|Case Fan||SilenX EFX-09-12 92mm Effizio Quiet Case Fan|
|Power Supply||SeaSonic Snow Silent 750 750W ATX12V / EPS12V SLI Ready 80 PLUS PLATINUM Certified Full Modular Active PFC Power Supply|
FreeBSD 10's ZFS on Root mirrors the 3 x 4 TB disks in RAID1 (replacing the 2 TB disks using UFS and gmirror), and the 750W 80 Plus Platinum SeaSonic Snow Silent PSU replaces the X650 650W 80 Plus Gold. The 12 dBA case fan with fluid dynamic bearings replaces an aging 92mm sleeve fan.
The SSD is used to provide the ZFS L2ARC cache. Other hardware upgrades are possible (replacing all magnetic hard disks with solid state drives, upgrading from 8 GB ECC DDR3 1333MHz memory to up to 32 GB 1600MHz (the maximum supported by the Supermicro X9SCA-F motherboard), or a different Intel Xeon E3-12xx v2 Ivy Bridge processor besides the current E3-1230 v2 (8M Cache, 3.30 GHz)), but this is primarily a software upgrade (FreeBSD 9 to FreeBSD 10, clean install on new disks).
Several new changes to this server recently. Updated to FreeBSD 9.1, geographically relocated, and now proxied through Cloudflare. Cloudflare also now serves the DNS for the jeff.tk domain -- previously I used xname for many years, then switched to Namecheap Free DNS, but ultimately went with Cloudflare for their CDN services. Both Namecheap and Cloudflare support dynamic updates using a fork of ddclient. Also made another pass through much of this website, cleaning up old references and fixing broken links. Besides web (still using lighttpd), this server now hosts a few additional services:
Mail is hosted using postfix and dovecot, with Rollernet for inbound SMTP (MX records). dnsmasq provides DHCP and caching DNS, and rtadvd provides SLAAC autoconfiguration for LAN clients. This server now serves as the router as well, replacing the 2 TB 4th-generation Time Capsule, which is still used for local backup but now operates in bridge mode. CrashPlan is used for remote backup.
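The router role boils down to a handful of rc.conf knobs; a hypothetical sketch (the interface name em1 is illustrative, not the actual one):

```shell
# /etc/rc.conf fragment (sketch; interface name is illustrative)
gateway_enable="YES"        # IPv4 packet forwarding
ipv6_gateway_enable="YES"   # IPv6 packet forwarding
rtadvd_enable="YES"         # SLAAC router advertisements
rtadvd_interfaces="em1"     # LAN-facing interface
dnsmasq_enable="YES"        # DHCP + caching DNS (from the dns/dnsmasq port)
```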
The next-generation IPv6 protocol is finally here, with this server and the network it serves now natively IPv6-enabled (/128 for the server and /64 for the LAN). You may already be browsing this website over IPv6, if you are curious check your IP here.
Completely new server hardware installed at the end of last year (2012), the 7th iteration of this server. This is the first version of the server using server-grade components:
|Motherboard||SuperMicro MBD-X9SCA-F-O - LGA1155 Intel C204 Chipset ATX Motherboard, DDR3, SATA 6Gb/s, VGA, PCIE, Gigabit LAN|
|Processor||Intel CPU BX80637E31230V2 Xeon E3-1230v2 3.30GHz 8M 4Core/8Thread LGA1155|
|Memory||Kingston KVR1333D3E9SK2/8G ValueRAM 8GB ( 4GB x 2 ) 240-pin pc3-10600 DDR3 1333mhz ECC desktop memory module|
|Power Supply||Seasonic X650 650W Gold Retail Power Supply, 80Plus Gold|
|Storage||2 x Western Digital WD2002FAEX Caviar Black 2TB 7200RPM, 64MB Cache, SATA 6.0Gb/s 3.5" Internal Hard Drive|
New server software installation: FreeBSD 8.1-RELEASE amd64, on a new pair of 2 TB Western Digital hard drives in RAID-1 using gmirror. I call this "server 6.2". Lighttpd has been migrated and the site should function as before, but please let me know if you find any problems.
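Setting up a gmirror RAID-1 looks roughly like this; a sketch under assumptions (the label gm0 and device names are illustrative, and in practice the mirror is created before installing onto it):

```shell
# Sketch: join the two disks into one mirror and load the module at boot
gmirror label -v gm0 /dev/ad4 /dev/ad6
echo 'geom_mirror_load="YES"' >> /boot/loader.conf
# The mirrored device then appears as /dev/mirror/gm0, ready for partitioning
```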
The MediaWiki installation at /wiki/ has not yet been migrated.
Went through all the pages and fixed any broken symlinks I could. Much more of the site should now be accessible than before (it's about time).
I'm also uploading this site to the SDF mirror as soon as I finish this entry.
The primary site, jeff.tk, may not be accessible until I change ISPs (my current ISP seems to block port 80).
Big news. I replaced the fourth generation of the server (dragon). The fifth generation, named core, is a Core 2 Duo machine with a Gigabyte GA-965G-DS3 motherboard, 1 GB DDR675 Corsair RAM, 4 x 200 GB RAID 0+1 (for 400 GB total data storage) and a 30 GB 10,000 RPM SATA drive for the OS.
I moved this site to use lighttpd instead of Apache. Lighttpd is gaining popularity and I felt it was time for a change.
I'm reworking the script infrastructure used to generate this site. It's very archaic, having been in use since I had this site on my first personal computer, a K6-2 300 MHz running Windows 98. I think it is about time for an upgrade.
Currently I've gone through the first few pages of this site linked from the index, ending at p2p/search.html. The other pages have not been looked at yet, so don't expect anything to work. Check the mirror if something is broken, but if that is broken too...that's bad luck.
Summer again, finished first year of college. Verizon upgraded our DSL to 3Mbps download, 768Kbps upload (93KB/s), so this site should be faster. I plan to work on SUMI over summer, although I am taking courses at a community college so my time will be limited.
It's summer now, and I'm going to college very soon. My SDF MetaARPA membership (for xyzzy.freeshell.org) will expire soon, which means I'll have half the bandwidth quota there. Fortunately, jeff.tk has been paid for up to 2013, so I'll try to keep this site up until then.
Several new projects have been uploaded.
Since the last update, this server has gone through several changes. New WD hard drive (main partitions are on a WD RAID 1, as they have been for a while) and a portable USB2 drive, filled with FreeBSD, this web site, all my projects, a backup of my 2003 laptop hard drive, an earlier copy of my laptop's hard drive, a backup of a Maxtor hard drive used on my first computer, several images of a non-working Western Digital drive (hoping to recover it some day--has files from 1997!), a partial Project Gutenberg mirror, all my other files, and a ton of free space. This site is still served off the RAID 1. But more importantly, my Verizon DSL has been upgraded to 384Kbps (~40KB/s) upstream! So this site should be much faster than it was at a pathetic 128Kbps (16KB/s). It might even be usable.
A couple major updates. Fixed all known broken links on main page.
Last week the previous record of 21 days of uptime was broken. It is now at 26 days, 14 hours, 2 minutes. I've added an uptime CGI script so one can check the server uptime themselves, if curious. (Now removed)
This website has been left inactive for quite some time, I'm beginning to revive it; there have been a few major changes. To start off the new year, I have converted most dates into ISO standard YYYYMMDD format; additionally, the source files for this site now reside on my primary computer and will be linked to live, up-to-the-second versions. I am experimenting with a new building tool which should make updates go much more smoothly, and generally streamline this website development.
Received notices regarding payment for jeff.com.kg. Initially free, the KG NIC is now charging. I'm opting for jeff.tk instead. The TK NIC gives .tk domains away for free, although then you can only have a URL framed inside a page on their server, slowing down access excessively. For a registered domain, TK NIC charges $10/year with a 2-year minimum, and I'm considering purchasing one. On the plus side, jeff.tk is 36% shorter than jeff.com.kg, so the extra $4 ($10 for TK, $6 for com.kg) is well worth it. For the moment, everything but HTTP will be inaccessible via jeff.tk.
After an uptime of 20 days, the server was powered down for several reasons. We tried using a Plextor 1210A CD-RW burner, and it was in fact supported by FreeBSD, but for other reasons it was removed. The old 48x Memorex CD-ROM was replaced with a 52x Element CD-ROM. More importantly, I purchased a Thermaltake Volcano 7+ copper heatsink/fan for $30 at PC Club with an instant rebate, and it performs great. Set on the low RPM setting (about 3000 RPM), it's much quieter than the old Coolermaster, and keeps the CPU at a steady 38ºC.
The server was powered down yesterday (02/02/2002) afternoon due to incorrect power management settings in the BIOS regarding powering off the hard drives. Apparently, the way I configured ACPI does not work with RAID.
On the 15th we got the new parts for Server III. The detailed specs are available from the system rigs profile at Anandtech. In short:
|CPU||AMD Athlon XP 1600+ (1.4GHz)|
|System RAM||128MB PC2100 DDR|
|Motherboard||Soyo K7V Dragon+|
|Sound Card||Onchip CMedia CMI8738|
|Ethernet Card||Onchip VIA Rhine II 10/100|
|Video Card||3Dimage 9750 (975) 4MB|
|RAID||Onboard Promise Fasttrak-100 Lite (RAID 1)|
|Hard Disks||2 x IBM Deskstar 60GXP 60GB 7200RPM|
|Power Supply||EnerMax Whisper EG365P 350W|
|Hard Disks||Quantum Fireball 2.5GB (swap space)|
|CD-ROM 1||48x Memorex|
Hard disk setup:
/dev/ad0s1b   2.5GB  SWAP   Primary Master - Quantum Fireball 2.5GB
/dev/ar0s1a   1GB    /      \
/dev/ar0s2e   37GB   /home   \
/dev/ar0s1g   0.5GB  /tmp     RAID 1 - 2 x 60GB IBM Deskstar 60GXP's
/dev/ar0s1e   16GB   /usr    /
/dev/ar0s1f   1.3GB  /var   /
SDF upgraded their bandwidth from 1.5Mbps DSL to a 155Mbps multi-homed OC-3, and installed several new machines. Thanks to EveryDNS, I was able to point www.jeff.com.kg to the mirror selection page hosted on SDF, while jeff.com.kg still goes to my DSL line.
I plan to point www.jeff.com.kg to the SDF mirror once I find a suitable DNS provider. MyDomain does not let you point www.jeff.com.kg separately from jeff.com.kg (for example); they even admitted to me in e-mail correspondence that this is not possible, yet I was able to do the same with web.jeff.com.kg. Right now jeff.com.kg's DNS is hosted on Hammernode, which allows direct editing of DNS records, although pointing www.jeff.com.kg to my SDF account would require giving SDF $168 annually, according to their vhost plan.
I also am considering building a brand new server to replace the Pentium 100MHz, possibly with hardware RAID, 60GB disks, and an Athlon XP. With the cash I received for Christmas my dream server may become a reality.
Modified Sun Mar 25 08:56:28 2007
generated Sun Mar 25 08:56:33 2007