Hosting E-mail @ Home

Having been in the ISP and hosting industries from roughly 1996 to 2014, I lived through the transition of mail servers from something everyone ran themselves as a matter of course into a nightmare of spam, block lists, and threat actors’ favorite entry point, the kind of thing any rational person or company chooses to outsource to entities with far more resources to dedicate to the challenges. So I’ve long scoffed at people who want to run mail servers at home.

Your ISP, the providers of mail services large and not-so-large, and the purveyors of block lists and anti-spam/threat gateway services all want to make this very difficult for you. Deliberately.

People find a way, often using the free tier of some bulk mail sending service to route their outbound mail, or perhaps getting lucky DIY-ing it with a VPS provider who has managed to keep their IPs off the naughty lists.

I prefer not to be a freeloading non-paying customer of anyone, since “free” services and “free tiers” tend to become more restrictive or disappear over time, and I know it’s foolish to believe that any VPS provider which allows SMTP traffic won’t eventually have a block list go nuclear on its entire IP space. So I’ve long chosen to pay Microsoft for their lowest-cost o365 account to host e-mail for my own domains, and I think the $6/m or whatever is a fair price to pay.

But I’ve had this theory that one could use Microsoft’s Exchange Online Protection plan, which costs just $1/m* per user, to cheaply avoid many of the pitfalls of self-hosting e-mail servers. Today I put it to the test with a domain I’d registered last week, partly to prove the point and partly for the opportunity to mess around with Synology MailPlus.

It absolutely worked. Here are the steps I followed:

  1. Installed Synology MailPlus.
    • Configured for my domain.
    • Activated a single user.
    • Configured SMTP Smart Host pointing to my VPS over a VPN tunnel.
  2. Configured port forwarding and firewall rules to only allow TCP/25 connections to the Synology from o365 IP ranges.
  3. Clicked ‘Buy Now’ at Exchange Online Protection Plan.
    • Created a new o365 tenant account.
    • Handed over my credit card info for the $1/m charges.
  4. Added my domain to o365 tenant.
    • admin.microsoft.com -> Settings -> Domains.
    • Let Microsoft make the changes to my Cloudflare DNS.
    • Adjusted the automatically created SPF DNS record to include my VPS IP, along with my static IP for completeness.
  5. Changed the Domain Type to InternalRelay.
  6. Created Connectors for Inbound and Outbound e-mail.
    • admin.exchange.microsoft.com -> Mail Flow -> Connectors.
      • For Inbound I have a static IP and port 25 isn’t blocked, so I configured that to connect directly to my mail server.
      • For Outbound I am blocked on port 25, and for the purposes of a Connector there are no alternate port options, so I proxy through the VPS using nginx.
      • Outbound requires either having a static IP or using a certificate for auth. I went with IP auth.
  7. Enabled DKIM
    • security.microsoft.com -> Policies & rules -> Threat policies -> Email authentication settings -> DKIM
    • Added DKIM DNS records.
  8. Added DMARC DNS record
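
The outbound leg from step 6 is the only genuinely fiddly part: the Synology’s smart host sends over the VPN to the VPS, and the VPS relays onward to EOP on port 25. Here’s a minimal sketch of that relay using nginx’s stream module; the VPN address and tenant MX hostname below are made-up placeholders, not my actual setup:

```nginx
# On the VPS, at the top level of nginx.conf (alongside any http block).
# Listening only on the VPN-side address means only tunnel traffic can relay.
stream {
    server {
        listen 10.8.0.1:25;   # VPS's address on the VPN tunnel (placeholder)
        proxy_pass example-net.mail.protection.outlook.com:25;   # tenant's EOP smart host endpoint (placeholder)
    }
}
```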

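The DNS changes from steps 4, 7, and 8 end up looking roughly like this. Everything below is a placeholder — the domain, the onmicrosoft.com tenant name, and the two IPs (VPS and home static) are invented, and the real DKIM CNAME targets come from the DKIM settings page for your own tenant:

```dns
; SPF: EOP's include plus the VPS and home static IPs (placeholder IPs)
example.net.           IN TXT   "v=spf1 include:spf.protection.outlook.com ip4:203.0.113.10 ip4:198.51.100.20 -all"
; DKIM: CNAMEs pointing at the tenant's published keys (placeholder tenant)
selector1._domainkey   IN CNAME selector1-example-net._domainkey.contoso.onmicrosoft.com.
selector2._domainkey   IN CNAME selector2-example-net._domainkey.contoso.onmicrosoft.com.
; DMARC: a conservative starting policy
_dmarc                 IN TXT   "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.net"
```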
Microsoft could do a much better job of unifying all these different administration portals, but overall it wasn’t that bad. I tested by sending an e-mail to my Gmail account and it was successfully delivered to my Inbox, having passed SPF, DKIM, and DMARC:

Image of Gmail "Original Message" view showing that SPF, DKIM, and DMARC all passed for an e-mail received from my home e-mail server.

And, mind you, this was a first message from a domain on a less-common .TLD, registered days ago with no prior reputation.

$1/m* seems pretty cheap to ride on Microsoft’s coattails for e-mail delivery, along with getting 1 day of inbound mail spooling, plus whatever value Exchange Online Protection provides as an anti-spam gateway. It’s not abusing the service in any way — this is what any cloud-based anti-spam service needs to function. EOP is special because Microsoft o365 services don’t have monthly minimum charges.

Of course, there are providers out there specifically offering inexpensive inbound MX spooling and outgoing SMTP relay services to the frugal self-hoster. Dynu, for example, is $9.99/yr per domain and service.

I might be halfway to talking myself into bringing all of my e-mail home. For my collection of domains and potential user count, EOP is dirt cheap.


* Plus the price of a VPS if dealing with port blocking, but we all need a VPS for something anyways, right? And that EOP license is “per user” but Microsoft won’t know how many users you have unless they ask you / audit your o365 licensing, and “Yep, just the X, it’s just for me [and my X-1 family members / friends / employees], and EOP is the only service needed right now” ought to be a perfectly acceptable response.

Synology, Part Deux

Dave Jansen’s Synology DS920+ Final Impressions has some insight into why a Synology may not be a great alternative to DIY for those who are reasonably technical. I was fortunate to be gifted a very old Synology that was destined for e-waste disposal, so my approach has been to figure out whether it’s good enough at anything to be worth more than just pulling the drives and disposing of it, rather than buying based on marketing hype and discovering the hard way that it doesn’t quite live up to expectations. Worst case for me was that I’d have overpaid a few bucks on shipping.


Amazon has been putting the DS1522+ on sale for $579.99, so I decided to go ahead with that plus the 10GbE card and a pair of 512GB Samsung 970 EVO Plus drives for cache. It stings, but for me the value of Active Backup for Business alone makes it worth the price over a timespan well under the 3-year warranty period, and Synology’s track record is such that I expect its useful life to extend until at least 2029. Zero-percent financing for 12 months on my Amazon VISA soothes the immediate pain to my wallet.

A week in and I’m pretty happy with that decision. Local incremental backups of my laptop, four servers, and 10 VMs, running simultaneously, now finish in under 10 minutes — inconceivably fast compared to what I was used to with my previous urbackup setup. For giggles I decided to see how well it could handle backups of the mini PC at our cabin running Hyper-V and four VMs over a VPN that can manage about 3.5Mb/s upload. Took a whole-ass day for the first pass but now the dailies are running around 1:45, which is an absolutely acceptable window for hogging all the precious upstream during hours when nobody should be awake.

The kids’ laptops remain a challenge… but reality is that it’s hard to care that much. They probably don’t have any / much data worth caring about that isn’t in a cloud somewhere. And eventually they’ll learn to appreciate backups the hard way, like our ancestors have done for centuries.

Next I need to figure out backing this thing up off-site to our cabin. Backups of my backups. Because if a fire can take out all of your original data along with the backups, it’s not that great of a backup. There’s no good space to put a 2U RackStation, and truth be told, I don’t want to leave anything there that would inconvenience me to replace because it goes unoccupied for long periods and it’s meth country.

Might be a good use case for one of my retired HP Microserver Gen8 systems with MinIO as an S3 backup target for Hyper Backup. Or going dirty with Xpenology. TBD.
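
If the Microserver-plus-MinIO route wins out, the MinIO side can be a single container presenting an S3 endpoint for Hyper Backup to target. A hypothetical compose file, with the credentials and data path as placeholders:

```yaml
services:
  minio:
    image: quay.io/minio/minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: hyperbackup               # placeholder credentials;
      MINIO_ROOT_PASSWORD: change-me-long-random  # use something actually random
    volumes:
      - /srv/minio:/data                          # placeholder data path
    ports:
      - "9000:9000"   # S3 API, the endpoint Hyper Backup points at
      - "9001:9001"   # web console
    restart: unless-stopped
```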


In the meantime, I’m slowly exploring more of what DSM has to offer. I’ve started using Synology Photos to back up my iPhone camera roll, which is the single biggest feature that has kept me attached to Dropbox. From that perspective, Photos does all that I need. Facial recognition and object classification range from underwhelming to hilariously bad, but those aren’t features I care all that much about today.

(In my dreams, Synology Photos, or PhotoPrism, or any of the other open source alternatives would have an integration for Google Vision AI and/or Amazon Rekognition — they’re reasonably affordable at the “80% of every photo and video I’ve ever taken is presently on my phone” scale)

I’ve also got Cloud Sync doing its thing with my Dropbox, Google Drive, and OneDrive for Business accounts. That’ll replace having them shared out from my former backup server VM. And I’ve consolidated the rest of my file shares — they don’t see much use, but getting them all in one place is… something entirely unimportant to me. DSM makes it all much more pleasant than TrueNAS, though.

Synology Drive Server is next on my exploration todo list. Because I’ve been wanting to stop paying for Dropbox for a long time but it’s hard to fight against the inertia and all the little ways that Dropbox burrows itself into your life. In the general everyday sense my needs are so basic that literally anything that reliably syncs files and has an iPhone app is an adequate substitute, but I have a few apps that use Dropbox to sync settings/data across devices, and a few things that perform their own backups directly to Dropbox. I suspect that some of them will be stuck on Dropbox and I will have to come up with a process to stay under the free tier limits.

Synology

A friend recently sent me a Synology RackStation that was destined for e-waste. Full of drives no smaller than what I feed my existing storage server, no less. A good friend indeed, amiright?

He said it had been upgraded to the 6GB “maximum” — 2GB “onboard” plus a 4GB DDR3 SO-DIMM. I don’t know much about Synology hardware but in the past I’d randomly acquired the knowledge that sometimes the “onboard” RAM is actually a SO-DIMM on the underside of the board.

Underside of Synology motherboard showing additional SO-DIMM socket

And sure enough, there it is. Seems an odd design choice given that this RackStation’s motherboard is so much larger than it needs to be… but I guess odd choices are the norm for companies that tie their software and services to seemingly over-priced custom-engineered hardware instead of just selling software and services on their actual value.

Synology System Information screenshot showing 16GB RAM recognized.

So my RackStation now has 16GB RAM. In theory this system should support 32GB RAM but 16GB DDR3 SO-DIMMs carry around a 10X premium over 8GB so I’m not about to find out.


Dashboard of the Synology Active Backup for Business

To me the killer Synology feature is Active Backup for Business, which is only available on certain models (+ / x64?). As a total slut for centrally-managed backups and bare-metal restores, I moved to urbackup after Microsoft abandoned the fantastic client PC backup system included with Windows Home Server & Server Essentials. Urbackup is about the only Open Source backup system that does Windows decently, is properly multi-platform and multi-arch, and offers change block tracking and Hyper-V host-based VM backups as commercialized add-ons for reasonable fees (or free via the community edition of their commercialized virtual appliance).

ABB is much better by most measures. It’s prettier. It’ll do agentless VMware and Hyper-V VM backups. It can back up “unsupported” platforms via rsync and SMB. Backup times are fast — none of my daily tasks run over 15 minutes — and with a household full of laptop users that’s critical to keeping them current. I’ve yet to try a bare-metal restore, but individual files and whole VMs restore about as fast as the storage/network can muster.

I see a few areas where ABB could do better:

  1. Backup task settings are individual to the device. There are Templates whose settings are applied at the initial creation of a device’s backup task, but after that the task’s settings are independent from the original Template. There’s no mechanism to perform changes in bulk. It is possible to create a new task for multiple devices at once, but that will create individual tasks for each. Backed up data is tied to a particular task and the interface warns that removing a task will remove all the data, so that’s not a path to faking bulk updates. 

  2. From the Portal, the presentation of BitLocker-encrypted volumes within Hyper-V VMs is concerning. BitLocker-encrypted volumes from “PC/Mac” and “Physical Server” backup tasks are visible and browsable through the Portal like any other volume, but from a Hyper-V VM backup the volumes do not show up in the Portal at all. I tested an Instant Restore to Synology’s Virtual Machine Manager — the volume was properly restored and, unexpectedly, VMM provided vTPM functionality so the VM operated normally after initially entering the recovery key.

    So this is a case of the Portal interface being misleading and not an actual problem. 

  3. ReFS volumes are not supported. ReFS is over a decade old and still struggles with 3rd-party support. Heck, it’s not clear that Microsoft really wants to support it as a general-purpose filesystem. Which is sad because we’ve got nearly 20 years of ZFS advocates shouting at us that copy-on-write, checksumming filesystems are the greatest thing since the hierarchical filesystem and if you’re not using one you don’t care at all about your data and probably kick your dog.

    I mostly use ReFS for Hyper-V datastores so this is an effective way to filter them out from backups of a Hyper-V host as a “Physical Server” without having to manually customize their backup tasks. 

  4. BTRFS volumes are not supported, which is odd because ABB requires BTRFS for its own backup storage. Despite its protests, in my testing it did back up an LVM-based BTRFS system, but it does not restore the LVM to a usable state. BTRFS within a Hyper-V VM was fine. 

  5. Linux Agent is x64-only. If you want to back up ARM/MIPS/RISC-V/32-bit Linux devices, you’ll be doing it old school via rsync or SMB. But at least it’ll be a centrally-managed pull instead of an unmanaged client-initiated push, where you’d need to come up with some other method to notice when your backup jobs have failed (you always have monitoring of your important cronjobs, right?). 

  6. Desktop “PC/Mac” and Windows/Linux “Physical Server” devices are handled slightly differently. A Windows “Physical Server” backup can be restored to VMware/Hyper-V/VMM while a “PC/Mac” backup cannot. A Windows “PC/Mac” device can be changed to a “Physical Server” but not the reverse. And, sorry Linux desktop users, you can only be a “Physical Server.” There’s also a minor scheduling difference, see below. 

  7. Backup scheduling is rigid. Backup tasks are scheduled for specific times and days-of-the-week and will not be made up if missed or interrupted. For PC/Mac backups it is possible to have a backup task triggered when a user logs off, the screen locks, and/or at startup, but for laptop users those may not be sufficient to stay within desired backup intervals.

    With all of the backup systems I’ve previously used, I would define backup windows and target intervals and the system would figure out when to actually initiate backups. Missed or interrupted backups would be made up automatically at the next window or availability of the client, depending on the configuration. 


A problem for Future Me is that DSM 7.2.x will go out of maintenance in mid-2025, and it’s probable that 7.3 will not support this hardware. The current nearest equivalent is the RS2423+ at $1,999.99. That’s a big chunk of change to spend up front for backups over the 7-9 years of expected support. An RS822+, DS1522+, DS923+, DS723+, or even DS423+ might be suitable for Future Me’s primary use case of backing up other systems; I’ll need to see how much storage backups consume after soaking for a year… but it’s hard to get over my preference for software that doesn’t lock me into hardware.

Taylor Swift: It's me. Hi. I'm the problem, it's me.