When we moved from focusing primarily on innovation to also developing a demo platform, our developers began working with very different frameworks and libraries. As the number of libraries and frameworks grew, we faced dev-setup issues with our monolithic architecture, including:
- Installing and supporting multiple libraries and frameworks. Our developers were locally installing and maintaining multiple libraries and frameworks that they would never 'directly' interact with for their assigned tasks.
- Software versioning. Keeping everyone across different teams on the same software versions to avoid inconsistent run-time environments is a project manager's nightmare.
We began to consider moving to a microservices architecture, which would spare our developers from installing most, if not all, of these libraries and software applications in their working environments.
Industry literature and Rahul’s personal experience at North Carolina State University pointed to a shift away from monolithic architecture to a microservices architecture because it’s more nimble, increases developer productivity, and would address our scaling and operational frustrations.
However, moving to a microservices architecture forced us to address the architecture's requirements – namely, how do we access these services: using in-house servers or through third-party hosted platforms?
We first considered moving straight to cloud services through well-known providers such as Google Cloud, Amazon Web Services, and Microsoft Azure ~~because all the cool kids are doing that~~. Cloud computing rates have dropped dramatically, making hosted virtual computing attractive.
However, at the time, we were still exploring microservices as an option and were not fully committed. We also had (and still have at the time of writing this post) a lot of homework to do before transitioning to the cloud. When we added security and intellectual property (IP) concerns to the mix, we decided on an in-house solution for the time being.
This blog post is about our process of determining which servers we would use to host the microservices.
Here we go
To get up and running quickly, we repurposed four older, idle Apple Mac Pro towers that were initially purchased for departed summer interns. We reformatted the towers and installed Ubuntu Server 16.04 LTS to ease a future transition to the cloud, because most cloud platforms support some version of Linux (Ubuntu) out of the box.
The towers featured:
- Intel Xeon 5150 2.66 GHz dual-core processors with 4 MB cache
- 4 GB PC2-5300 667 MHz DIMM
- NVIDIA GeForce 7300 GT 256 MB graphics cards
- 256 GB Serial ATA 7200 RPM hard drives
These towers were fairly old – the Xeon 5150 processors were released in June 2006.
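If you're repurposing similar hardware, specs like these are easy to confirm once Ubuntu Server is up – a quick sketch using standard Linux tools (exact output varies by machine):

```shell
# CPU model and logical core count
lscpu | grep -E 'Model name|^CPU\(s\)'

# Total installed memory
free -h | awk '/^Mem:/ {print "Memory:", $2}'

# Disks: name, capacity, and whether rotational (1 = spinning HDD, 0 = SSD)
lsblk -d -o NAME,SIZE,ROTA
```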
Moving to a microservices architecture immediately solved many of our issues. First and foremost, it allowed us to separate our development environments into individual services.
For example, our AI engine for logic queries could work independently of our program-analysis engine and our text-mining work. This was incredibly helpful because developers working on program analysis who did not directly deal with the AI engine no longer had to install and maintain AI-specific libraries, and vice versa for AI developers and program-analysis tools.
Now, each team simply interacts with an endpoint, which immediately improved our productivity. More on this revelation in a future post.
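To make the endpoint idea concrete, here is a minimal sketch – the service name, URL path, and response shape are hypothetical, not our actual API. A microservice hides its dependencies behind an HTTP endpoint, and a consuming team needs nothing but the URL:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical stand-in for one microservice (say, the AI engine):
# consuming teams only ever see the HTTP endpoint, never the libraries behind it.
class QueryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"service": "ai-engine", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), QueryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consuming team needs only the endpoint URL -- no AI libraries installed.
url = f"http://127.0.0.1:{server.server_port}/query"
response = json.loads(urlopen(url).read())
print(response["service"])  # -> ai-engine
server.shutdown()
```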
As we continued to implement the microservices platform, we were pretty happy with the results. We were not yet committed to moving to the cloud when our servers started showing signs of their technological age – performance lags, reliability issues, limited upgrade capabilities, and increasing power consumption (not to mention heat, which wasn't such a bad thing during Colorado winters). The limited DIMM capacity, limited cost-effective upgrade options, and constant OS crashes hampered our efforts.
For the next ‘phase’ of our microservices evolution, we decided to acquire performant hardware specifically geared for hosting microservices.
Phase Change is a small startup with limited funding, so we had to purchase equipment that would meet our needs within a budget. Like many 'cool' startups, we are a Mac shop, so we naturally gravitated toward Mac mini servers. We were already using Mac minis for file hosting, and there are plenty of websites detailing how to use them, such as MacStadium's Mac mini Q&A.
After ~~conducting random Google searches~~ extensive online research, we decided our best option was not the Mac mini with OS X Server but the original Mac mini model. The Mac mini with OS X Server features an Intel Core i7 processor and dual 1 TB Serial ATA drives, but Apple stopped offering it in October 2014.
So, we considered the next best thing, mid-level original Mac minis that included:
- Intel Core i5-3230M 2.6-3.2 GHz processors with 3 MB cache
- Intel Iris Graphics cards
- 8 GB 1600 MHz LPDDR3 memory
- 1 TB 5400 RPM hard drives
- 1000 Base-T Gigabit Ethernet support
The Mac mini's form factor – 7.7 inches wide, 1.4 inches high, and 7.7 inches deep – and power consumption – 85 W maximum continuous power – were also appealing. The retail base price is $699. The cost-effective modern processors and increased memory were the most important factors in our consideration, and the tiny Macs would integrate well into our 'cool' Mac company environment.
We were all set to move on the Mac minis until we found Russell Ivanovic's blog post, which revealed that the Mac mini product line hadn't been updated since October 2014 – over 2.4 years – yet Apple was still selling them at new-computer prices. So much for the minis. Aargh!
Luckily, we didn't have to start at square one this time around, because Ivanovic's blog post revealed what he bought instead of the mini – an Intel NUC Kit mini PC.
We ~~asked Siri to do the math~~ crunched the numbers and found that the NUC was a reasonable Mac mini replacement. The Intel NUC Kits are mini PCs engineered for video gaming and intensive workloads. The base models include processors, graphics cards, slots for RAM and permanent storage devices, peripheral connectivity ports, and expansion capabilities. We upgraded our NUC6i7KYKs to include:
- Intel Core i7-6770HQ 2.6-3.5 GHz quad-core processors with 6 MB cache
- Intel Iris Pro Graphics 580 cards
- Crucial 16 GB (8 GB x 2) DDR4 SODIMM 1066 MHz RAM
- Samsung 850 EVO 250 GB SATA III internal SSDs
- 1000 Base-T Gigabit Ethernet support
The following table presents a technical comparison of the old Mac Pro towers, the Mac mini, and the Intel NUC Kit.
| | Mac Pro Tower | Mac mini | Intel NUC Kit NUC6i7KYK | Comments |
|---|---|---|---|---|
| Base price | $200-$300 | $699 | $569 | The Mac tower has been discontinued, but you can still buy preowned hardware. We chose the mid-level Mac mini ($699) for comparison fairness. |
| Processor | Intel Xeon 5150, 2.66 GHz dual-core | Intel Core i5-3230M, 2.6-3.2 GHz dual-core | Intel Core i7-6770HQ, 2.6-3.5 GHz quad-core | You can upgrade the Mac mini to an i7 processor for $300. |
| Graphics | Nvidia GeForce 7300 GT | Intel Iris 5100 | Intel Iris Pro 580 | Apple hasn't officially released the Mac mini's exact graphics chipset, so we used specs from EveryMac.com for comparisons. |
| RAM | 4 GB PC2-5300, 667 MHz | 8 GB LPDDR3, 1600 MHz | Crucial 16 GB DDR4 SODIMM, 1066 MHz | Out-of-the-box NUC Kits do not include RAM. We installed 16 GB of DDR4 SODIMMs for $108 each. The Mac mini is upgradeable to 16 GB for $200. |
| Storage | 256 GB Serial ATA, 7200 RPM | 1 TB Serial ATA, 5400 RPM | Samsung 850 EVO 1 TB SATA III SSD | Out-of-the-box NUC Kits do not include internal storage. We installed 250 GB SSDs ($109 each) for a good performance/capacity mix, but use a 1 TB SSD here for comparison fairness. You can upgrade the Mac mini to a 1 TB Fusion Drive (1 TB Serial ATA 5400 RPM + 24 GB SSD) for $200. |
| Comparison purchase price (per unit) | $200-$300 | $1,399 | $997 | Mac mini upgrades: Intel Core i7 processor ($300), 16 GB LPDDR3 memory ($200), 1 TB Fusion Drive ($200). NUC Kit upgrades: 16 GB DDR4 memory ($108), 1 TB SSD ($320). |
Our NUC Kits' total price ended up being $786 per unit with the 16 GB of DDR4 RAM and 250 GB SSDs. If we had opted for 1 TB SSDs to match the standard capacity in the mid-level Mac mini, our price would have jumped to $1,199 per unit.
We chose the Intel NUC Kits over the Mac minis because of the NUC Kits' updated technology and overall better performance for the price. Assembling the NUCs and installing Ubuntu Server 16.04 LTS on them was very straightforward.
Both units are fully configured and have been in full production operation for a few weeks. We haven't encountered any issues. I'll divulge more on how they perform over time with different microservices and workloads in future blog posts.
PS: We still looooove Mac towers and we are currently using them as test beds. That will also be the subject of a future blog post.
Cross-posted from Phase Change Software blog.
Guest Writer Todd Erickson is a tech writer at Phase Change. His experience includes content marketing, technology journalism, and law. You can reach him at email@example.com.