(gs) Updates
We have some good stuff this week. Internal testing of two new pieces of tech is wrapping up – Fiji Agent (automated cluster migrations) and MySQL 5 for Clusters 1-3. Our efforts to get (gs) Cluster.05 online are coming along nicely. We are also making preliminary plans for a GRID-wide software update that will give our customers a refresh of popular software package versions. Various bug fixes and improvements to our issue monitoring and response mechanisms are the icing on the cake. Keep reading for more details!
Storage Transition
In the past two weeks we’ve made mountains of progress on our transition to Gen 2 storage. Much of the work has been getting new hardware in place; that part is now done and we’re ready to ramp up the pace. Even while the new hardware was being brought online, we migrated an additional 500 customer sites to 2nd Gen storage, and the added capacity will accelerate the rate of site migrations considerably.
Fiji Agent
We’re happy to announce that the first iteration of our Fiji Agent technology is now complete. This week is dedicated to finalizing all internal testing. A big thanks to all of our customers who opened a support request about this new tech, and for being so patient. The best way to be notified when Fiji Agent is available is to have an open support request asking to be added to the Fiji Agent notification list.
MySQL 5 BETA next week!
In parallel with the Fiji Agent beta, we have been putting the final touches on MySQL 5 for all clusters. Once again, thanks for being patient with this one. Next week we will start emailing BETA applicants to begin testing; applications will be processed in the order received.
Cluster.05 progress
In addition to Fiji Agent and MySQL 5, much of the focus in the past few weeks has been on getting our next Cluster online. We’re happy to report that things are looking very good. This new cluster will be based on our 2nd Generation storage architecture, and will meet the growing demands of our (gs) Grid-Service product.
New Anti-SPAM weaponry
In our last update we touched on what we’re doing to help fight SPAM on the GRID. This change is technical in nature, so forgive us for the geek-speak! The biggest customer-facing change relates to blocking outbound “direct to MX” connection attempts from our web nodes. Over 99.75% of our entire (gs) customer base will remain unaffected. Essentially, this will only block scripts that are designed to act as their own MTA (mail transfer agent); there’s a short illustration after the list below. Most scripts already do the right thing and relay through (mt) mail servers by default. As we continue collecting valuable feedback in our (mt) User Forums, we’d like to clarify a few points here:
- This change has NOT been implemented yet.
- Sending mail from your email application will not be affected.
- Sending mail from inside a RoR or Django container will not be affected.
- Scripts that use ‘sendmail’ will not be affected.
- The email sending mechanisms included with many 3rd-party applications will continue to function.
- Scripts that establish a direct MX connection to an outside mail server over port 25 will no longer function.
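If you’re not sure which camp your script falls into, here’s a minimal Python sketch of the two approaches. The addresses and the MX hostname are placeholders (not actual (mt) settings), and whatever language or library your script uses will have equivalents:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "user@example.com"        # placeholder addresses
msg["To"] = "friend@example.org"
msg["Subject"] = "Hello from a (gs) script"
msg.set_content("Test message.")

# Relaying through a local mail relay ("localhost" here stands in for however
# your environment hands mail off to (mt) mail servers) -- this is what most
# scripts already do, and it is NOT affected by the change.
with smtplib.SMTP("localhost") as relay:
    relay.send_message(msg)

# Acting as your own MTA: connecting straight to the recipient's MX host on
# port 25 from a web node. Outbound connections like this WILL be blocked.
# ("mx.example.org" is a placeholder, not a real mail exchanger.)
# with smtplib.SMTP("mx.example.org", 25) as direct:
#     direct.send_message(msg)
```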
We encourage you to drop by the discussion in our (mt) User Forums for more details. We’ll set a date for this change in a future announcement.
GRID-wide software package update
Our GRID nodes run the Debian Linux operating system. We picked this distribution because it provides a stable environment and mature software packages, even if those packages run slightly behind the latest version numbers. Debian’s newest major release, ‘Lenny’, came out less than two weeks ago, and we are currently in the preliminary planning stages for updating all (gs) Cluster nodes to it. This essentially means that many popular software packages, such as Subversion and Python, will get a much-deserved version bump. As this is a major change for the (gs) product, it must first undergo internal testing before deployment to our production clusters. We’ll have more info about this in a future blog post.
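If you’d like to note what your container is running today so you can compare after the update, something like this quick sketch works from an SSH session. It assumes the standard svn and mysql command-line tools are on your PATH; running `svn --version` and `python -V` by hand does the same job:

```python
# Print the tool versions currently installed on a (gs) container, for
# comparison after the Debian update. Assumes `svn` and `mysql` are on PATH.
import subprocess
import sys

print("python :", sys.version.split()[0])

for tool in ("svn", "mysql"):
    try:
        result = subprocess.run([tool, "--version"],
                                capture_output=True, text=True, check=True)
        print(f"{tool:7}:", result.stdout.splitlines()[0])
    except (FileNotFoundError, subprocess.CalledProcessError):
        print(f"{tool:7}: not found")
```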
Other News
We also implemented a few other changes that improve service health and consistency.
Network driver patches – Stability Enhancement
During a recent Scheduled Maintenance our engineers applied a patch to the Nvidia Ethernet Controller driver on all (gs) Storage Segments. The patch fixes several outstanding bugs that could have impacted the availability of the (gs) platform.
Smarter monitoring of (gs) Storage Segments – Issue Isolation
Recent improvements to our monitoring and diagnostics tools give us much more visibility into potential problems before they become customer-facing. That extra lead time lets us take proactive measures, which translates directly into higher platform availability. Future applications of this new tech will include even more automated “self-healing” properties.
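To give a rough sense of how these proactive checks work (this is only an illustrative sketch; the metric names and thresholds are made up and it is not our actual monitoring code), the idea boils down to polling each Storage Segment and flagging anything that is approaching a limit while there is still time to act:

```python
# Illustrative sketch only -- example metric names and thresholds, not
# production monitoring code.
THRESHOLDS = {"disk_used_pct": 85, "io_wait_pct": 30}

def check_segment(name, readings):
    """Return warnings for one Storage Segment whose readings near a limit."""
    warnings = []
    for metric, limit in THRESHOLDS.items():
        value = readings.get(metric)
        if value is not None and value >= limit:
            warnings.append(f"{name}: {metric} at {value} (limit {limit})")
    return warnings

# Example readings for two hypothetical segments.
segments = {
    "segment-01": {"disk_used_pct": 62, "io_wait_pct": 12},
    "segment-02": {"disk_used_pct": 91, "io_wait_pct": 8},
}

for seg, readings in segments.items():
    for warning in check_segment(seg, readings):
        # In practice this would page an engineer or trigger a self-healing action.
        print("PROACTIVE ALERT:", warning)
```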
That’s all for this update! Thanks for reading!