Last week, I got the news that my grandfather had died. The news was very bittersweet. I had talked to him a few weeks before, and I could tell he was just so tired of life. He had always been a very active person, but the last few years, his body just failed to keep up. Especially after my grandma died a little over a couple years ago, there just wasn't much for him to do or anyone to do it with. (Not that he was totally alone or that no one visited — he had 24/7 care, and family in the area that would check up on him often.) So, while I'm very sad to have him go, I'm happy he can be reunited with Grandma and other friends that have gone before him.
I'm also very fortunate that I was able to make it out there — and even more fortunate that I was able to make it home.
I have a cousin in the area who was kind enough to take me in and give me a place to stay. In talking with him as I made my travel plans, the most convenient flights from Denver to San Francisco and back were with Delta. Despite having a layover in Salt Lake both ways, the times worked out decently well for the price.
The flight out was pretty uneventful, which in the context of travelling is a good thing. The flight back, however, is where things got interesting. My flight out was scheduled late in the day of the funeral, and we planned to go from the funeral to the airport for me to catch my flight.
Early in the morning, as we were getting ready to leave my cousin's house for the funeral, I got a text message from Delta indicating that there was a change to one of my flights. I checked using the Delta app (once again entering my confirmation number, since the app forgot it — by this time, I had it memorized, which would become very important). The flight from SFO to SLC was unchanged, but the connection from SLC to DEN had been delayed from that day to 6am the next morning, giving me an overnight layover in Salt Lake. Now, the convenient thing about the Delta app is that, when you have a delay like this, it gives you a list of alternatives that you can re-book at no extra cost. At first, though, the list only showed me flights that started the next day. I didn't want to impose on my cousin for another day (plus have to take another day off of work), so I kept scrolling until I finally saw an option that let me leave the same day. It started with the same flight from SFO to SLC, but the next flight was from SLC to Los Angeles, with about a 5-hour layover there until a 2:30am flight back to Denver. It wasn't great, but at least it got me home.
I clicked the button to book the flight. Then, I looked over my itinerary. SFO to SLC, SLC to LAX, and then LAX to DEN, departing at... 7pm the next day? My 5-hour layover in LAX had somehow turned into a 24-hour layover. And the option to reschedule had disappeared from the app.
With the app failing me, I decided to call the customer service number and talk to a human. They put me into a queue with a 20-minute wait, and the recording gave me the option to have them call me back instead of waiting on hold the whole time. I selected that, and we set off on the road for the funeral.
At this point, it's important to know that I have T-Mobile cell service. It's also important to know that, in the area where my cousin lives (and, as it happens, over most of the route between his place and where the funeral was scheduled), T-Mobile's coverage can charitably be described as "not great". Sure enough, about 25 minutes later, my phone alerted me to a voice mail message, which I played back and heard the automated attendant trying to connect me to an agent. They called back as promised, but I had no signal at the time.
So I called again, this time remaining on hold for the now-25-minute wait time, listening to really bad electronic hold music broken only by a recorded message telling me how I could make changes to my reservation using the Delta app. (Yeah, it was the app that got me into this mess in the first place, thanks.) After about 25 minutes, as promised, an agent picked up. I was able to give him my name and confirmation number, and I heard him comment on how strange my reservation looked now. "Let me see what I can do," I heard — right before we entered another T-Mobile dead zone and the call dropped.
This time, I borrowed my cousin's phone (connected to AT&T service, which had better coverage in that area). After another 25-minute wait (which included several more recorded messages telling me how great the Delta app was), I was able to talk to an agent, who noted that Denver was due for some bad weather, causing some flight complications. To his credit, he was able to find me a flight. It was an Air New Zealand flight operated by United (shrug), with direct service from SFO to DEN. It was scheduled to leave San Francisco a little later (giving us more time to get there from the funeral), but to arrive home at nearly the same time. Sounded like a win-win. After we got off the phone, I checked the Delta app, and it showed my new flight (though, since it was a United flight, without the ability to check in or make a seat selection or any of the great features provided by the "great" Delta app).
After the funeral, as we headed to the airport, I decided once more to check the app. It still showed me booked on the New Zealand/United flight, with no indication that anything was amiss. For some reason, I decided to do a quick Bing search (hey, I get Rewards points for using Bing, don't judge) on the flight, and it displayed the status: Cancelled. My cousin asked if I wanted to just return to his house, where I could start making calls, but at that point, I figured we could just continue to the airport, and I could talk to the agents there in person to figure things out.
Once there, I went to the Delta desk. I kind of expected the answer that they wouldn't be able to help me after passing me over to United, and in that, I wasn't disappointed. Of course the Delta and United desks were on opposite ends of the terminal, so I had a bit of a walk over to United.
I got a little worried at the United desk when they couldn't find my reservation in their system, on any flight. Delta's reservation number wasn't helpful, but the ticket number was — once I found it buried under several clicks in that "really great" Delta app. (There was an additional snag in that the computer wouldn't let the agent do anything with my ticket, but he got that straightened out by talking to his tech support — something about the ticket using my full middle name where he tried searching by just my initial.) After a few minutes, though, he was able to put me on a United flight straight from SFO to DEN. He handed me the boarding pass, which showed a departure time of about 1:30pm, about three hours prior. But, not to worry; that was just this flight's original time; it had been delayed until 5:30, which gave me a fair amount of time to get through security and to the flight.
It turns out this flight's plane had some mechanical issues, so they had replaced it with another plane from Chicago that unfortunately wasn't going to get into San Francisco until after 4. Well, unfortunate for the people who were originally booked on that flight, but fortunate for me, I suppose. I kind of felt bad as I sat in the waiting area, listening to people complain about being there for four hours, when I had shown up less than an hour before.
The plane from Chicago ended up being a little delayed; and, after we boarded, they had an issue with the door seal that required a maintenance crew to give it a once-over and fill out some paperwork, further delaying take-off. Then, while I didn't have a window seat and couldn't confirm this, it sure felt like we taxied around to every runway before finally taking off at a little after 6:30pm.
But finally, 2 hours later, despite a snowstorm that was just kicking into high gear, we touched down in Denver, and I was as good as home — at just about the same time I was originally scheduled on my first Delta flight from Salt Lake. As if nothing happened.
And good thing, too; the storm dropped snow all through the night, with nearly 500 flights cancelled at Denver the next day. If I hadn't made it home that night, it could've easily taken me a couple days to get home, right in the middle of the Thanksgiving travel rush.
I ran into an issue that appears to be caused by Microsoft attempting to protect me from myself. Although, truth be told, it wouldn't have been an issue if things were a little better designed.
Imagine, if you will, a SQL Server database with a table of transactions. One of the fields on this table is a CorrelationId. It's a text field that is populated by a different system to tie transactions together (for example, two sides of a transfer from one customer to another). This field always gets populated on new transactions; an uncorrelated transaction will simply be the only one with its particular CorrelationId. However, this system is not new; it replaced an older system that had no CorrelationId at all. So, although the five million or so transactions created by this system have a CorrelationId, there are 12 million "legacy" records whose CorrelationId is NULL.
So, say, for a given transaction, you want to find all correlated transactions. In SQL Server, you might use a simple query like this:
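A sketch of such a query (assuming an illustrative Transactions table with an Id primary key and the indexed CorrelationId column; the real names differ):

```sql
-- Find every transaction that shares the given transaction's CorrelationId
SELECT t2.*
FROM Transactions t1
JOIN Transactions t2
    ON t2.CorrelationId = t1.CorrelationId
WHERE t1.Id = @TransactionId;
```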
And this would work, for the most part (except for legacy records, since SQL will fail to match on the NULL value — but we can ignore this for now). If you took this query into SQL Server Management Studio and looked at the execution plan, you'd see a nice thin line from the index seek on the CorrelationId, showing that it found and processed a tiny number of matching records, resulting in a very quick response.
However, if you were trying to do this programmatically from a C# application using Entity Framework 6, you might write some code like:
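Something along these lines (a sketch with illustrative names; assume context.Transactions is the EF6 DbSet for the table):

```csharp
// Self-join the table on CorrelationId to find all correlated transactions
var correlated =
    from txn1 in context.Transactions
    join txn2 in context.Transactions
        on txn1.CorrelationId equals txn2.CorrelationId
    where txn1.Id == transactionId
    select txn2;

var results = correlated.ToList();
```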
The problem is, in C# code, null values are equal to each other; while in SQL, "null" is considered "unknown", and doesn't equal itself. (The theory is, you can't know whether one "null", or unknown value, equals another "null"; so equality tests between "null" and "null" are false.) Instead of leaving it up to the programmer to explicitly code for this condition, Entity Framework "helpfully" writes the join clause that it gives to SQL Server in this manner:
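The join predicate comes out looking roughly like this (simplified; EF6's actual output uses generated Extent aliases and a full column list):

```sql
...
INNER JOIN [dbo].[Transactions] AS [Extent2]
    ON ([Extent1].[CorrelationId] = [Extent2].[CorrelationId])
    OR (([Extent1].[CorrelationId] IS NULL) AND ([Extent2].[CorrelationId] IS NULL))
...
```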
The extra check for IS NULL on both sides has two unfortunate side effects in this case:
- If the transaction is one of the legacy records, it will return a positive match on all 12 million other legacy records with a null CorrelationId.
- If the transaction has a CorrelationId, because of the IS NULL, SQL Server will investigate the 12 million null values in the CorrelationId index, resulting in a big fat line from the index seek in the execution plan, and a return time of a couple seconds or more.
The really annoying part is that there doesn't appear to be a way to stop this. Even if you explicitly add a check for a not-equal-to-null on your target table, Entity Framework still wraps the equality test with checks for IS NULL. The result is almost comical. For instance, adding txn2.CorrelationId != null, either in the join statement or as a where clause, results in this (with contradictory statements highlighted):
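The result comes out roughly like this (simplified from EF6's actual output, with the contradictory statements marked in comments):

```sql
...
INNER JOIN [dbo].[Transactions] AS [Extent2]
    ON ([Extent1].[CorrelationId] = [Extent2].[CorrelationId])
    OR (([Extent1].[CorrelationId] IS NULL)
        AND ([Extent2].[CorrelationId] IS NULL))  -- still matches NULL to NULL...
WHERE [Extent2].[CorrelationId] IS NOT NULL       -- ...while also requiring NOT NULL
...
```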
Even trying to break up the work into two statements didn't help. This code:
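The two-statement version looked something like this (again a sketch with illustrative names): first fetch the CorrelationId, then query by the fetched value:

```csharp
// Step 1: look up the CorrelationId for the transaction in question
string correlationId = context.Transactions
    .Where(t => t.Id == transactionId)
    .Select(t => t.CorrelationId)
    .First();

// Step 2: find every transaction sharing that value
var correlated = context.Transactions
    .Where(t => t.CorrelationId == correlationId)
    .ToList();
```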
Resulted in this SQL:
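Even with the value in a plain local variable, EF6 still guards the comparison with null checks, producing SQL along these lines (simplified):

```sql
SELECT ...
FROM [dbo].[Transactions] AS [Extent1]
WHERE ([Extent1].[CorrelationId] = @p__linq__0)
   OR (([Extent1].[CorrelationId] IS NULL) AND (@p__linq__0 IS NULL))
```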
Granted, this is a really bad situation to be in to begin with. Indexes on text fields tend to perform poorly, and having such a huge number of null values in the index is likewise unhelpful. A better design would be to rip the text field off into another table, or otherwise convert it into an integer that would be easier to index (something we've had to do in other tables on this very same project, where we've had more control of the data).
I'm willing to bet that Microsoft's translation goes completely unnoticed in over 99% of the cases where it occurs. And, if I had the time to make a design change (with all of the necessary changes to all points that hit this table, some of which I don't have direct control over), it could have been resolved without fighting Entity Framework. Even just populating all of the legacy transactions' CorrelationId with random, unique garbage would've solved the problem (though with a lot of wasted storage space that would've made the infrastructure team cry).
In the end, it was solved by creating stored procedures in the database to do correlated transaction lookups (where the behavior could be controlled and predictable), and having C# code execute those directly (bypassing EF6) to get transaction IDs. Standard Linq queries would then use those IDs, instead of trying to search the CorrelationId.
This whole exercise was prompted by a script that I had to run to get a bunch of data from a decent number of transactions. It took nearly eleven hours to complete, finishing close to 1am after I started it. If I had time to go through this debugging and implement the fix, it turns out I could've gotten it done in about a third of the time.
The news broke that the Church of Jesus Christ of Latter-day Saints has officially announced that it will no longer sponsor Boy Scouts of America troops. As a member of that church who currently serves in a calling as a Cub Scout den leader, as a father of four boys — one of whom just earned his Eagle Scout, and one who likely will within the next year — and as the husband of someone who currently volunteers as a Roundtable Commissioner for the BSA district, I have some thoughts.
First off, this has been a long time coming. The Church has members all across the world, and the Boy Scouts is an American institution. While Church leaders have often noted that the BSA is, essentially, the Young Men's organization within the Church, this has been completely unavailable outside of this country. We've heard rumors for a long time of the Church looking to institute something that would be available to members everywhere.
There are certain benefits to dumping the BSA from the Young Men's program. I've heard from numerous sources how the youth budget is often seen as unfair, with the Young Men getting far more allocation than the Young Women. However, the main reason for this is that the Church pays much of the registration fees, awards, and so forth required by the BSA — which is something that the Young Women do not have to deal with. The BSA also has many rules, regulations, and so forth that can be difficult for a ward full of unpaid volunteers to navigate. (One of the responsibilities my wife has is to help new scout troops and packs get through some of this — and the ones requiring help are not limited to LDS Church-sponsored troops.)
The move could also allow Young Men's groups more freedom in their activities. Without having to follow strict rules and a prescribed list of activities and merit badges that boys are supposed to earn, the program could be more easily tailored to the needs and desires of the people in it (kids and leaders alike).
On the other hand, there are some things that would be a great loss. The structure and required activities are sometimes a benefit. When I asked my wife about her experience growing up in Activity Days (the program for girls the age of Cub Scout boys), she expressed her disappointment that it was little more than a cooking class. Her leaders were skilled in the kitchen and enjoyed doing that, so, without any real incentive to do anything else, that's what they did, week after week. Not being a very outdoorsy person myself, if I didn't have the Cub Scout requirements to drive me, I would likewise have very little incentive or direction to do some of the things that the boys in my care would enjoy (or should learn).
With the lack of BSA sponsorship, access to some of the campgrounds and resources that the Church currently enjoys will be lost. So although there will be more money to go around, some activities will cost more.
The Boy Scouts of America is a national institution, recognized by people regardless of their religious affiliation. Anecdotally, having an Eagle Scout rank is something that employers see as an asset in potential employees. Certainly, a Church program would be less likely to carry the same weight, even if its requirements were similar to those of the Eagle Scout.
Very little is commonly known about the new youth program-to-be. The official separation — and, I presume, the implementation of the new program — is scheduled for the beginning of 2020. Time will tell how many of the benefits and losses will be offset by the new benefits (and drawbacks) of the new program. My belief in an inspired leadership gives me great hope that many of these things will be addressed, and that the new program (which, incidentally, is to replace the current programs for boys and girls) will be successful.
One common refrain in the news is that the entire LDS membership of the Boy Scouts is going to go away. While that's most certainly an exaggeration, there is some truth to it. Without the encouragement inside of the Church to join Scouting, it's very likely that fewer boys will feel inclined to join. Something I've heard from many parents, and have experienced myself, is that there seems to be very little time to fit in all of the extracurricular activities that all of your kids are involved in. With the Church bringing in its own program, boys who want to do Scouts will now have two activities to balance (Scouting and whatever the new program is) — and there's no guarantee that meeting times won't conflict. Also, although religion has been an integral part of the Boy Scouts (belief in God is a requirement, even if the manner of worship is completely open), I have heard various reports of non-LDS-sponsored packs and troops using Sunday as an activity day, which is something that most members of the Church try to avoid. Additionally, an LDS-sponsored pack or troop typically takes care of paying the dues for its members; joining another troop will mean paying those membership costs out of the family's own pocket. (I have not had much personal experience in this, so I do not know how willing or able other troops are to assist with dues and fees for families that may need the help.) So even though the option to join Scouting is not out of the question, it will be more difficult for LDS youth to make that commitment. The BSA membership is probably going to take a significant hit. While it might not be 100% of LDS youth, it wouldn't surprise me to see the number be very high.
The timing of the announcements has certainly been interesting. Several publications have noted that the announcement of the LDS/BSA split came within a week of the BSA announcing that they would change their program name from "Boy Scouts" to "Scouts" (as part of their move to include girls in the program), leading many to speculate that the name change was part of the cause. I won't deny that it certainly looks that way, but I suspect the reality is much more complicated.
The announcement of the split consisted of simultaneous press releases from both the Church and the BSA. While it may not be outside of the realm of possibility, I have a hard time believing that a coordinated PR move happens that quickly, which leads me to believe that both sides knew about this for a long time. Now, it's certainly possible also that the Church knew of the BSA's plans ahead of time as well, so there might yet be some merit to the idea that the Church made plans to leave because of the direction the BSA was moving. It's also possible that the BSA knew the Church was planning on striking out on their own with their own youth program, and the decision to include girls was made to try to mitigate the potential loss of LDS boys from their ranks. In other words, the cause-and-effect might be reversed.
Still, from the perspective of public perception (or at least the narrative that some press are driving), it does look like the Church is abandoning the Scouts because of the changes in the Scouts. While it may sound conspiratorial, I have to wonder if the announcement wasn't timed to make the Church look bad, so everyone could point and laugh: "Look at how those backwards Mormons run from simply having to include girls in their program!" On the other hand, if the timing were reversed, the BSA would be the ones to look reactionary, and the story would be, "Look at how desperate the BSA is for members now, that they're changing their name to include girls!" The timing from the BSA could have been more defensive than offensive.
All told, I'm kind of looking forward to the change. While Scouting has had some great benefits for my boys and the other boys in the Church, I won't deny the administrative side has been a stress to deal with; and I'm hoping the new program will bring some positive change. I don't know if my own younger boys will continue in Scouting (my oldest is already an Eagle and moving into his adult life; my second oldest will be aging-out at nearly the same time the split becomes official), but it will be a choice we will prayerfully consider in the months to come.
I had one of those shower moments where my mind started replaying conversations and debates I'd had or witnessed over the years. For some reason, it settled on the idea of universal healthcare (not something I've argued much on either side, but definitely witnessed a lot). Proponents often describe this as "free healthcare", which leads to opponents arguing that "it's not free" since it's paid for by taxes. I've even seen one argument that you'd have a hard time convincing a doctor to use his skill and many years of medical school learning and training for no cost.
"Yeah," said the voice in my head, "just like police and firemen should expect to be paid for their service."
And that's when it occurred to me. The proposal shouldn't be "free healthcare"; it should be "make healthcare a public service". Because that's really the truth. No one's really suggesting that anything be "free". They're suggesting that the costs be covered by society as a whole (i.e., government, paid through taxes), rather than by the individual using the service at that point in time.
While I can understand the appeal of calling it "free", I think proponents do the discussion a great disservice by using that word. It implies, at best, a fundamental misunderstanding of economics, and, at worst, a lie covering it up (since both sides know that health care costs actual money, that it's not really "free" at all).
Do I think a mere change in word choice will clear up the whole discussion? Absolutely not. There are still plenty of points to argue — quality of care, the ability of government to manage, and the actual cost to the public, just to name a few — but I do think it would at least let us get past the part where we argue about "free" being "free" or "not free".
UPDATE: I have to discourage using this trick. For reasons I do not yet know, it doesn't seem to work with a large dataset. I do not know the exact point at which it fails, I just know that it does. I noticed that a significant number of values that should have been updated with text, actually got updated with nulls. As much as I would love to investigate this and try to see what is wrong and whether it's a failure in C#, SQL, or some combination, unfortunately, it's more important that my work actually get done; so I've had to abandon the XML route entirely.
Original post follows.
It's been a while since I've posted, well, anything. But I learned of a neat trick that I thought I'd post.
I'm currently working on a program that is converting data from two different sources into a single database. A lot of it is just done with carefully crafted SQL statements, but there are a few steps where I have to take data from one source and use some C# code to do some kind of processing before storing it in the target database. Since the data set is on the order of millions of rows, processing these records one at a time can be prohibitively time-consuming. And, since I have limited access to the SQL Server itself, using SQL CLR isn't a great option. (I probably could get the access if I needed to, but it will be an additional step to have to remember and configure when this goes to production, and the fewer moving parts I create for myself, the better.)
One of the tricks I've implemented is to use multi-threading to let the different steps run simultaneously — one thread extracts the records and puts them into a ConcurrentQueue<>, another thread processes that queue and puts the results into another, and a third thread updates the records in the database.
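A minimal sketch of that pipeline shape. ReadSourceRecords, Transform, and WriteToTarget are illustrative stand-ins for the real database steps, and I've used BlockingCollection<T>, which wraps a ConcurrentQueue<T> by default and adds the "producer is finished" signaling:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

var extracted = new BlockingCollection<Record>(boundedCapacity: 10000);
var processed = new BlockingCollection<Record>(boundedCapacity: 10000);

// Thread 1: pull records out of the source database
var extractor = Task.Run(() =>
{
    foreach (var record in ReadSourceRecords())
        extracted.Add(record);
    extracted.CompleteAdding();
});

// Thread 2: the C#-only processing step
var processor = Task.Run(() =>
{
    foreach (var record in extracted.GetConsumingEnumerable())
        processed.Add(Transform(record));
    processed.CompleteAdding();
});

// Thread 3: write the results to the target database
var updater = Task.Run(() =>
{
    foreach (var record in processed.GetConsumingEnumerable())
        WriteToTarget(record);
});

Task.WaitAll(extractor, processor, updater);
```

The bounded capacity keeps a fast extractor from piling millions of rows into memory while the slower stages catch up.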
I've been trying to come up with ways to do the update in batches. There are ways to create a stored procedure that will take a table parameter, and ways to call that stored procedure by binding the parameter to an equivalent DataTable, but I didn't like the idea of creating a DataTable object just to pass the records in. It just seemed too "heavy" to me. (Though it might've been faster than calling a command object in a loop for records one-by-one.)
Another option was to create a VALUES table and build the command text dynamically. But, since I was working with strings, I didn't like the idea of building dynamic SQL and having to escape quotes or any other special characters that might cause SQL to choke. (Not to mention it's just bad practice, even if my code is unlikely to be used as a SQL injection vector.)
So, I came up with the idea of passing in values as an XML document. By building the XML with Linq-to-XML C# code, all necessary character escapes would be performed automatically. I could pass in as many values at once as I felt comfortable with, and let SQL do the work in a batch instead of one at a time.
To give some context to this code, I am taking email addresses that were encrypted in the source database, and converting them to their decrypted values in the target database. At this point, my queue consists of objects that have two properties: EncryptedEmail and DecryptedEmail. Earlier in my conversion work, I've simply copied the encrypted strings over into the Email field of the table, so all this method has to do is update the table, changing the Email field to its decrypted value.
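A sketch of the idea (the table name, column sizes, and connection handling here are illustrative, not the real schema). Linq-to-XML builds the document, handling all character escaping automatically, and SQL Server's nodes() and value() methods shred it back into rows so the whole batch updates in one set-based statement:

```csharp
// Build <emails><e enc="..." dec="..."/></emails> from one batch of queue items
XElement doc = new XElement("emails",
    batch.Select(item => new XElement("e",
        new XAttribute("enc", item.EncryptedEmail),
        new XAttribute("dec", item.DecryptedEmail))));

using (var cmd = new SqlCommand(
    "UPDATE t SET t.Email = x.e.value('@dec', 'nvarchar(320)') " +
    "FROM @Batch.nodes('/emails/e') AS x(e) " +
    "JOIN dbo.Customers t ON t.Email = x.e.value('@enc', 'nvarchar(320)');",
    connection))
{
    cmd.Parameters.Add("@Batch", SqlDbType.Xml).Value =
        doc.ToString(SaveOptions.DisableFormatting);
    cmd.ExecuteNonQuery();
}
```

(Per the update above, this approach silently produced nulls on a large dataset for me, so test it carefully before trusting it.)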
As a warning, because I know the boys will tell you as soon as you walk in the door, the internet is down.
As I was debating whether or not I should leave work or try to get one more thing done first, I got a text from my wife saying the internet was down at the house. It happens from time to time. Usually, I just have to reboot something, and usually, that thing is the wireless router that most everything connects to. It's a Rosewill, which I bought mostly because my trusty Linksys WRT-54G was having a hard time keeping up with all the devices we kept adding to the mix. The Rosewill supports wireless N and has a much better signal range, but maybe once every week or so, things just go a little "wonky" and it has to be rebooted. Just this past weekend, in fact, it ended up "jamming" my home network completely, sending so much traffic (that wasn't actually going anywhere) that none of my machines could hear each other. I didn't immediately know it was the router at the time, but when I went to the basement to check the servers and all the networking gear, that's when I saw the rapid flashing on the switch connected to the Rosewill router.
I didn't expect it to be a big deal this time, either, but I got a couple more texts with some more details. Of course, my wife had tried rebooting the wireless router already. (Even the kids know that, sometimes, you just have to go over to it, pull the power plug, wait a few seconds, and plug it back in.) When that didn't help, she went down to the basement herself, and she heard some high-pitched beeping from what she described as a small box with blue lights on it. She turned it off, waited a bit, and tried turning it on again; and when it started screaming at her immediately, she just turned it back off.
From her description, I knew the device in question was the UPS. It seemed strange that the UPS would be beeping like that, unless the power was off and it was running out of batteries or something. It's a common story in tech support circles to get a call from someone who claims their computer doesn't work, and only after troubleshooting for a while does the clueless user say something like, "Well, I can't quite see, because the power is out and it's dark in here." I didn't believe my wife would fail to mention a power outage, though, so I figured it must be something else. The UPS going bad, perhaps? A tripped circuit breaker that cut the power to that outlet?
I got home and went downstairs to check things out. It was very quiet, which seemed like a bad sign. Two servers — the email server, and the main server that does just about everything else — are plugged into the UPS, but a third — the media server, which stores all our DVDs for easy access — is plugged straight into the outlet. If the UPS were bad, the media server should still have power. At this point, I'm thinking it's a tripped circuit breaker.
I went out to the power box to check the breakers, but none were tripped. I located the one going to the servers and flipped it off and on, just in case; then I headed back inside. At this point, I'm getting a little concerned. We ran that power line ourselves; did we do something really wrong in the process? It's been fine for a few years, though. If it did go bad, what can I do to get power to that corner of the basement while we figure things out?
Back in the basement, the outlet still had no power. I noticed, though, that we had installed a GFI outlet. Maybe that's what tripped. I pushed the red "reset" button, and as soon as it clicked, the media server hummed to life. Ok, that's what's keeping things down. Now, I'll bring things back up and see if it trips again, and then figure out what's causing the problem. I turned on the UPS, which gave only the slightest of beeps. The email server gave a soft beep as it got power, and then….
A little background on the main server. This thing runs pretty much everything. It is the only thing connected to the cable modem on one network card, and another network card connects to the switches that distribute internet traffic to the rest of the house. It was a machine I built several years ago, picking out the parts and assembling them myself. I didn't look for anything special in the case, but the one I happened to find on sale had some interesting LED lights on the fans and a clear side window, so you can see everything inside. I didn't even know about these features of the case when I bought it; I was just looking for something that would hold all the parts together for a decent price. Over the years, the server has been carefully configured to do everything I need it to. It has a web server, which is mostly used by my wife for her web design work. It has a minimal email server, which does some preliminary filtering before passing email on to my "real" email server inside the network. It does the firewall and routing, with some hand-crafted iptables scripts to make sure bits go where they're supposed to. It has a DNS server, which is configured to give easy access to important devices on the network by name, plus has the bonus of having a few hundred known advertising sites redirected to the address 0.0.0.0 as a convenient, network-wide ad block. (Fun fact: I tried to do the same with porn sites, but when I got a list of known sites and fed them into my DNS server, it promptly crashed. There were just way too many to filter out wholesale.) It also has a large file store with an FTP server used internally to back up, share, and keep files we want to hang on to.
Anyway, as the main server got its turn to power up, there was a series of three or four very loud POPs, accompanied by a bright flash that could be clearly seen through the case's clear side panel. Accompanying the popping noise, I shouted something that I don't quite remember. And then everything went quiet again as the GFI switch once again tripped and cut the power. A thin tendril of blue smoke leaked out of the power supply fan of the main server, and the smell of fried electrical parts hung in the air.
I went upstairs and told my wife the bad news. The server just exploded.
My wife helped me get the server unplugged (mostly because, even with the power cut, I was still a little terrified to touch the thing after what I had just seen), and I took it upstairs where I had more light and began taking it apart. There were a few cobwebs and a lot of dust inside, but no obvious sign of what blew up. Unfortunately, there's no easy way to tell which components are still good and which are dangerous to reuse. Unwilling to risk frying any more components than necessary, I resigned myself to having to buy a new machine and rebuild.
I could only hope at this point that the hard drives were ok. The server contained four in total — two smaller ones that held most of the OS, and two larger ones that made up the file share, each pair in a RAID-1 array. But without access to the internet, downloading the appropriate installation media would be tricky. Not that I had a replacement server handy, anyway. First things first: find a replacement.
I took a quick trip to the nearest electronics-type store, that being Best Buy. I knew it was probably a long shot going in there, and, unfortunately, I was right. Plenty of laptops and costly consumer desktop systems, but nothing that would be good for a server. I wasn't willing to overspend on a system that wasn't suited for the task.
My next bet was Micro Center, which was a half hour away. Unfortunately, that, too, was a wasted trip. Pre-assembled systems were limited to the desktop and laptop variety. They do have a large array of components for building machines from parts, but, being perfectly honest with myself, I was not in a frame of mind to start piecing one together in a hurry. If I'm going to build something, I want to take the time to research and really put together what I want for the best value. But I needed a server, and quick. I figured Amazon was probably going to be my best bet.
In the parking lot of the Micro Center, I double-checked Amazon's site. (I had looked before I left the house, but I didn't commit to anything as I wanted to at least try to buy something from a local store that I could take home and start working on that night.) I found a couple possibilities, but my biggest issue was trying to find the internal specs on the machines. This mini-tower server looks like a good deal, but does it have the internal space and ports for four full-sized SATA hard drives? I don't know if it was because I was trying to use the mobile website, or if their site was really lacking that information, but it was very hard to track down. (I probably would have found more details on NewEgg, but I wanted to take advantage of Amazon's better prices and faster shipping.) I found one that actually included a mention of "space for 6 drives" in the description, placed the order, and elected to pay extra for one-day shipping.
On my way home, I started to go over my options. I wouldn't be able to restore the web and file servers until the new machine arrived, but what could I get up and running now? I had that old Linksys wireless router, which I had installed DD-WRT firmware on, making it very configurable and something I could really tweak. That, I figured, could take over the duty of routing and firewalling for the internal network, and we would at least have internet access again. Email might be a bigger problem, though. Sure, the email server was alive, but the way I had it configured, it depended on the main server to filter email first. Maybe some of the security settings I had applied in the not-too-distant past would allow me to grant it more direct access to the internet without becoming an open relay for spam. But that could be a secondary task.
I got home and set to work, hooking up the Linksys router in place of the main server. I had some issues getting it configured, since my prior tinkering with the device (when it was just a toy to play with) had left it in a weird state. I ultimately had to reset it to its defaults and rebuild the configuration from there. DD-WRT has a very convenient web-based interface, though, and it took me much less time than I expected to get things to a working state. What slowed me down the most was that devices on the network still remembered their configuration from the main server, and didn't immediately update to point to the Linksys router when I brought it online.
With that accomplished, I figured I'd try setting up email. I had a few small issues there, too, but again it came down to having to reboot the email server a couple times to force it to pick up its new network configuration. From there, I had some trouble getting external email server testing tools to talk to my email server, which slowed me down a bit. It turned out that the email server hadn't taken too kindly to being forcibly rebooted, and the email services just plain hadn't started up. (It's amazing how much better things can work if the expected program is actually running.) I forwarded the secure email ports through the firewall easily enough, but I wasn't too sure about opening up the unsecured email port required to let outside email come in. It turned out much better than I expected: the security settings I had enabled recently were working perfectly. I ran a couple different open relay tests against my email server (which is something I always, always do when I tinker with it — last thing I want is to get shut down because my email server is sending out everyone else's spam), and it passed perfectly.
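In case you've never seen one, an open relay test just asks the server to carry mail between two domains it doesn't own, and makes sure it refuses. The real testing services probe dozens of variations, but the core idea looks something like this rough Python sketch (the hostnames are placeholders, not my actual setup):

```python
# Sketch of the basic open-relay check: connect to an SMTP server and try to
# hand it mail from one outside domain to another outside domain. A properly
# configured server should refuse the recipient. Hostnames are placeholders.
import smtplib

def relay_verdict(rcpt_code):
    # After RCPT TO for a foreign domain: a 2xx reply means the server agreed
    # to relay (bad!); 5xx means relaying was denied, which is what we want.
    return "OPEN RELAY" if 200 <= rcpt_code < 300 else "relay denied"

def check_relay(host, port=25):
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.helo("relay-test.invalid")
        smtp.mail("probe@relay-test.invalid")
        code, _ = smtp.rcpt("victim@somewhere-else.invalid")
        return relay_verdict(code)

if __name__ == "__main__":
    try:
        print(check_relay("mail.example.com"))  # placeholder hostname
    except (OSError, smtplib.SMTPException) as exc:
        print("couldn't reach the test host:", exc)
```

A pass means the server only accepts mail addressed to domains it actually handles; anything else, and the spammers will find you fast.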
So, now I'm back up and running with internet access and email. The major items are taken care of, so everything else from here can get rebuilt on a much less rushed timeline. (Still want to do it quickly, but it doesn't have to be done yesterday.)
Time to count the blessings and see what I learned.
The biggest blessing is that nothing burned down. The GFI outlet tripped, but the UPS at that point should have still been providing power. Near as I can figure, it also detected something was wrong and cut power, then beeped as an alarm. When my wife turned it off and back on, it must have been able to still detect the problem and not try powering on the server. I'm not sure what changed when I got to it later, but when I tried turning things on and it started making loud boomy noises, the GFI tripped again and the UPS just shut itself off immediately. If it hadn't, there could have been much more damage done, and possibly an electrical fire as well. (I'm still keeping my fingers crossed that the hard drives aren't fried.)
We're up and running. Email and internet are the most important things we have to keep going, especially with one child doing homeschool and taking lessons over the internet. I pay for a backup email server that, when our server is down, will receive and hold our email in a queue until our server comes back online; so we haven't lost any email.
With the Linksys router doing the routing and firewall duties, I can rebuild the main server and just keep it behind the firewall, without having to configure it for routing as well. Whenever the server has an issue in the future, I won't have to bring down the whole network with it. Plus, keeping the file share off of the computer that's exposed to the internet is a better setup anyway.
I should probably look into offsite backups. While I can hope that the hard drives didn't get fried, if it turns out that they did, I could be up a very smelly creek in a barbed wire canoe without a paddle.
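Even something as simple as a dated archive shipped off the premises would have taken the edge off this week. A rough sketch (the paths are placeholders, and the part that actually matters, copying the archive somewhere off-site with rsync or cloud storage, isn't shown):

```python
# Minimal nightly-archive sketch: tar up a directory with a dated name so a
# copy can be shipped off-site. Paths below are placeholders for illustration.
import os
import tarfile
from datetime import date

def make_backup(src_dir, dest_dir):
    """Create dest_dir/<name>-YYYY-MM-DD.tar.gz from src_dir; return its path."""
    os.makedirs(dest_dir, exist_ok=True)
    base = os.path.basename(os.path.abspath(src_dir))
    out_path = os.path.join(dest_dir, "%s-%s.tar.gz" % (base, date.today().isoformat()))
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=base)  # keep paths inside the archive relative
    return out_path

if __name__ == "__main__":
    if os.path.isdir("/srv/files"):  # placeholder path for the file share
        print(make_backup("/srv/files", "/var/backups"))
```

Cron that nightly, push the result off the premises, and a fried server becomes an inconvenience instead of a disaster.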