SSL, Firewalls & Security Basics Your Host Should Handle
There’s a question we ask agencies and business owners when they first talk to us: who’s managing your SSL certificates?
Learn more about what fully managed hosting actually includes, or see how we handle email security with DKIM, DMARC and SPF configuration.
The answer tells us everything. If it’s “me,” “my developer,” or worst of all, “I think they auto-renew?” — we already know the rest of the security picture. Patching is behind. The firewall is either default or non-existent. Nobody’s watching the logs. And the hosting provider’s idea of “managed” extends to keeping the server powered on.
Here’s the uncomfortable truth about managed hosting security in Australia: most hosts use the word “managed” to mean they’ll reboot the box if it falls over. Actual security — the kind that stops your client’s site from becoming a credential-harvesting operation — is either an upsell, a third-party bolt-on, or simply not offered.
This piece walks through what a proper security stack looks like when your host genuinely manages it. If you’re handling any of these yourself, it’s worth asking what you’re paying your host for.
SSL: the bare minimum that still gets botched
SSL certificates shouldn’t be something you think about. Ever. The entire issuance and renewal pipeline has been automatable since Let’s Encrypt launched its ACME protocol back in 2015. That’s a decade of automation maturity. If your host still emails you to “upload your certificate” or charges $50/year for a DV cert, they’re selling you a solved problem.
Here’s what proper SSL management looks like on a well-configured server:
Automatic provisioning. When a new domain is pointed to the server, certificate issuance should trigger automatically. Whether that’s through Certbot, acme.sh, or a panel-integrated ACME client, the process should be hands-off. A site goes live, HTTPS works. No tickets, no waiting.
Automatic renewal. Let’s Encrypt certificates expire every 90 days by design – it forces automation. A cron job or systemd timer should renew well before expiry (typically at the 60-day mark). If renewal fails, monitoring should alert the hosting team, not you.
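As a sketch, that renewal job can be as small as one cron entry — this assumes Certbot and Nginx, in the style of Debian’s packaged cron job:

```shell
# /etc/cron.d/certbot-renew — sketch, assuming Certbot + Nginx.
# `certbot renew` only touches certificates within 30 days of expiry,
# so running twice daily is safe; the random sleep spreads CA load.
0 */12 * * * root perl -e 'sleep int(rand(3600))'; certbot renew --quiet --deploy-hook "systemctl reload nginx"
```

The `--deploy-hook` only fires when a certificate was actually renewed, so the web server isn’t reloaded on every run.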
Proper TLS configuration. Having a certificate is step one. Configuring it correctly is step two, and it’s where most shared hosts fall short. That means enforcing TLS 1.2 as a minimum (TLS 1.3 preferred), disabling weak cipher suites, enabling HSTS headers, and configuring OCSP stapling so browsers don’t stall on revocation checks. Run your site through Qualys SSL Labs – if you’re not getting an A or A+, something’s misconfigured.
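For reference, an Nginx configuration meeting that bar looks roughly like the following — the cipher list follows Mozilla’s “intermediate” profile, and the resolver addresses are placeholders to swap for your own:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305;

# HSTS — only send once the site is committed to HTTPS everywhere
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

# OCSP stapling
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 9.9.9.9 valid=300s;
```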
Wildcard and multi-domain support. If you’re running subdomains for staging, client portals, or SaaS-style setups, your host should handle wildcard certificates via DNS-01 validation without making it your problem.
This is table stakes. But we routinely onboard sites running expired certificates, TLS 1.0, and cipher suites that were deprecated before the pandemic.
Web application firewalls: your first line of real defence
A firewall that only operates at the network layer — blocking ports and rate-limiting connections — isn’t enough for modern web hosting. The attacks that actually compromise WordPress sites, WooCommerce stores, and custom PHP applications happen at Layer 7. They look like normal HTTP requests. They arrive through the front door.
That’s where a web application firewall (WAF) earns its keep.
ModSecurity with the OWASP Core Rule Set (CRS) is the industry standard for open-source WAF protection on Apache and Nginx. It inspects incoming requests against a library of rules that detect SQL injection, cross-site scripting (XSS), local file inclusion, remote code execution, and dozens of other attack patterns. The CRS is actively maintained and updated as new attack vectors emerge.
But here’s the thing about ModSecurity: it’s only useful if it’s properly tuned. Out-of-the-box CRS with paranoia level 1 is a starting point. A managed host should be:
- Tuning false positives so legitimate traffic (form submissions, admin operations, API calls) isn’t blocked
- Adjusting paranoia levels based on the application profile – a static brochure site and a WooCommerce store with payment processing have very different threat surfaces
- Maintaining custom rules for application-specific attack patterns, especially around WordPress REST API endpoints and wp-admin
- Monitoring WAF logs to identify patterns that indicate targeted attacks versus background noise
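In CRS terms, that tuning lives in a handful of directives. A minimal sketch — the rule IDs (900000, 10001) and the excluded argument are illustrative, not from a real deployment:

```apache
# crs-setup.conf: raise blocking paranoia for a commerce site (CRS v4 syntax)
SecAction "id:900000,phase:1,pass,nolog,\
    setvar:tx.blocking_paranoia_level=2"

# Per-site exclusion: the WordPress post editor legitimately submits
# HTML that can trip the libinjection SQLi rule (942100) on ARGS:content
SecRule REQUEST_URI "@beginsWith /wp-admin/post.php" \
    "id:10001,phase:1,pass,nolog,\
    ctl:ruleRemoveTargetById=942100;ARGS:content"
```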
If your host installed ModSecurity, turned it on, and walked away — you’ll find out when a client calls because their contact form stopped working. Worse, you’ll find out when an attacker bypasses stale rules and the host shrugs because “the WAF was enabled.”
At Black Label, firewall configuration is an active, ongoing process. We review blocked requests, tune rules per site, and update rulesets on a schedule that doesn’t wait for the next major version bump.
Brute-force protection: stopping the constant hammering
Related: AI is making cyber attacks smarter — here’s what that means for your website
Every server connected to the internet faces a relentless stream of automated login attempts. SSH brute-forcing starts within hours of a new IP being provisioned. WordPress login pages get hammered by credential-stuffing bots around the clock. This isn’t targeted — it’s automated, industrialised, and constant.
fail2ban is the standard tool here, and it’s non-negotiable on any properly managed server. It monitors log files for patterns — failed SSH logins, repeated 401/403 responses, XML-RPC abuse — and dynamically adds firewall rules to block offending IPs. A sane configuration bans after 3–5 failed attempts, escalates to longer bans for repeat offenders, and permanently blocks the worst actors.
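A `jail.local` sketch of that escalation policy — thresholds are illustrative, and `bantime.increment` needs fail2ban 0.11 or later:

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
findtime  = 10m
maxretry  = 5
bantime   = 1h
# Repeat offenders get progressively longer bans, capped at four weeks
bantime.increment = true
bantime.maxtime   = 4w

[sshd]
enabled  = true
maxretry = 3
```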
But fail2ban alone isn’t the full picture:
Connection rate limiting at the web server level. Nginx’s limit_req and limit_conn modules (or Apache equivalents) throttle requests before they even reach PHP. This catches volumetric attacks and aggressive scrapers that aren’t technically “failed logins” but are still burning server resources.
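In Nginx, for example — zone sizes, rates, and the PHP-FPM socket path are assumptions to adapt per server:

```nginx
# http {} context: shared-memory zones keyed on client IP
limit_req_zone  $binary_remote_addr zone=login:10m rate=1r/s;
limit_conn_zone $binary_remote_addr zone=perip:10m;

# server {} context: throttle the login endpoint before PHP sees it
location = /wp-login.php {
    limit_req  zone=login burst=3 nodelay;
    limit_conn perip 10;
    # ...followed by the site's normal PHP handoff
    include        fastcgi_params;
    fastcgi_pass   unix:/run/php/php8.3-fpm.sock;  # socket path varies
}
```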
WordPress-specific protections. XML-RPC should be blocked or heavily restricted — it’s the most abused WordPress endpoint and almost no legitimate use case requires it in 2026. wp-login.php should have rate limiting applied independently of fail2ban. REST API user enumeration should be disabled unless actually needed.
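Blocking XML-RPC at the web server level is one Nginx location block — the request is refused before it ever reaches PHP:

```nginx
location = /xmlrpc.php {
    deny all;   # returns 403 without invoking PHP
}
```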
SSH hardening. Password authentication should be disabled entirely in favour of key-based auth. Root login should be prohibited. Moving SSH to a non-standard port sidesteps the bulk of automated scans — not a security control on its own, but it cuts log noise dramatically.
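The corresponding `sshd_config` directives — the port number is an arbitrary example:

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
# Cuts automated-scan log noise; not a security control in itself
Port 2222
```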
Your host should handle all of this at provisioning time. Not after the first incident.
Malware scanning: catching what gets through
No security layer is perfect. Plugin vulnerabilities get exploited before patches land. Compromised credentials from other breaches get reused. A realistic security posture assumes breach and builds detection around it.
ClamAV provides baseline file scanning, but it’s primarily designed for email-borne malware. For web-specific threats — PHP backdoors, obfuscated webshells, injected JavaScript credit card skimmers — you need tools that understand web application malware patterns.
A good managed host runs:
- Scheduled filesystem scans that check for known malware signatures, suspicious file modifications, and newly created PHP files in upload directories (a classic indicator of compromise)
- File integrity monitoring that alerts when core application files change outside of a known update window – if wp-includes/version.php changes and nobody ran an update, that’s a red flag
- Real-time upload scanning that catches malicious files before they’re written to disk, not after
- Database scanning for injected content – compromised sites often have malicious JavaScript stored in post content or widget areas that filesystem scans won’t catch
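The “PHP in upload directories” check is simple enough to sketch in shell. Uploads should only ever contain media, so any PHP file there warrants a look — the directory layout is whatever your platform uses:

```shell
# List PHP files under an uploads directory — a classic compromise
# indicator, since uploads should only ever contain media files.
scan_uploads() {
    find "$1" -type f \( -name '*.php' -o -name '*.phtml' \)
}
```

Real scanners go much further (signature matching, entropy checks for obfuscated code), but this alone catches a surprising share of drive-by webshells.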
When malware is detected, the response matters as much as detection. A managed host isolates the affected account, identifies the entry point (not just cleans the symptom), patches the vulnerability, and documents what happened. If your host’s response is “we found bad files and deleted them,” the reinfection is already underway.
Patch management: the unsexy work that prevents disasters
Most successful attacks exploit known vulnerabilities with available patches. Not zero-days. Not sophisticated nation-state tooling. Known bugs with fixes that nobody applied.
Patch management on a managed hosting platform covers multiple layers:
Operating system packages. Security updates for the kernel, OpenSSL, glibc, and system libraries need to be applied promptly — within days for critical CVEs, with a tested rollback plan. Unattended-upgrades (Debian/Ubuntu) or dnf-automatic (RHEL-family) handle routine updates, but a managed host reviews what’s being applied rather than blindly auto-updating production.
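On Debian, that security-only-but-reviewed posture looks roughly like this in `/etc/apt/apt.conf.d/50unattended-upgrades`:

```
// Apply security updates automatically; feature updates go through review
Unattended-Upgrade::Origins-Pattern {
    "origin=Debian,codename=${distro_codename},label=Debian-Security";
};
// Kernel reboots are scheduled by a human, not by the updater
Unattended-Upgrade::Automatic-Reboot "false";
```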
Web server and runtime updates. Nginx, Apache, PHP, MySQL/MariaDB — each has its own release cycle and security advisories. PHP version management is particularly important: running PHP 8.0 in 2026 means running unsupported software with known, unpatched vulnerabilities. Your host should be migrating sites to supported versions, not waiting for you to ask.
Control panel and tooling patches. Whatever management layer sits on the server — Virtualmin, Plesk, cPanel — needs its own update cycle. These panels have broad system access, and vulnerabilities in them are high-value targets.
Application-level updates. At minimum, a managed host should notify you about critical WordPress core and plugin vulnerabilities. Better hosts handle security updates automatically with pre-update snapshots, so a broken update is a 30-second rollback rather than a crisis.
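The snapshot-then-update pattern itself is simple. A shell sketch, where `snapshot` and `restore` are hypothetical stand-ins for whatever backup tooling the platform uses:

```shell
# Run an update command; if it fails, roll back to the snapshot.
# `snapshot` and `restore` are placeholders for real backup tooling.
apply_update() {
    snapshot || return 1   # refuse to update without a restore point
    if ! "$@"; then
        restore            # broken update: roll back immediately
        return 1
    fi
}
```

Usage would be something like `apply_update wp plugin update --all` — the point is that the restore path exists before the risky step runs, not after.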
The key distinction: a managed host has a patching policy. They can tell you their SLA for critical security updates. They track what’s running and what’s behind. If you ask your host “what’s your patching cadence?” and get a blank stare, that tells you everything.
Server-level hardening: the stuff you never see
Beyond the visible security stack, a properly configured server has hardening applied at layers most site owners never interact with:
- Kernel-level protections: sysctl tuning to prevent IP spoofing, SYN flood mitigation, and restrictions on core dumps
- Process isolation: each hosting account running under its own system user with strict filesystem permissions – one compromised site can’t reach every site on the server
- Restrictive PHP configuration: `open_basedir` limiting filesystem access to the account’s own directories, `disable_functions` removing dangerous calls like `exec()` and `passthru()` unless actually needed
- Automated backups with offsite storage: your last line of defence when everything else fails. Daily backups, retained for 30 days minimum, stored in a different AWS region from production
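A per-account PHP restriction sketch — the path and the exact function list are illustrative and vary per site:

```ini
; per-account php.ini / FPM pool — paths are illustrative
open_basedir = /home/site1/public_html:/tmp
disable_functions = exec,passthru,shell_exec,system,proc_open,popen
```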
This is the infrastructure-level work that separates a server from a managed server. It happens at provisioning, it gets audited periodically, and it’s invisible until the day it saves you.
The real test: what happens at 2am on a Sunday
Security tooling is one half of the equation. The other half is response. When ModSecurity flags a surge in SQLi attempts at 2am, does anyone see it? When a malware scan detects a webshell in a staging environment on a Sunday afternoon, how fast does it get handled?
Managed hosting security in Australia isn’t just about what’s installed — it’s about who’s watching and how quickly they act. Automated detection with nobody reviewing the alerts is just theatre.
The host charging $8/month that promises “managed security” doesn’t have the margins for a team that monitors, responds, and remediates. The maths doesn’t work.
If you’re doing this yourself, your hosting isn’t managed
Here’s the litmus test. If any of the following apply to you or your development team, your host isn’t providing managed hosting:
- You’ve manually installed or renewed an SSL certificate in the past year
- You’ve configured firewall rules on your hosting server
- You’ve cleaned malware from a client’s site without your host’s involvement
- You’ve applied server-level patches yourself
- You don’t know whether your host runs a WAF, or what rules it uses
- You’ve never received a proactive security notification from your host
None of these should be your problem. Not if you’re paying for managed hosting. Your job is building websites, running campaigns, or operating your business. The difference between managed and unmanaged hosting isn’t a marketing label — it’s whether someone’s actually doing the work.
What Black Label includes by default
Every Black Label Hosting account — from the $25/month Essentials plan to our managed VPS environments — ships with the full security stack described in this article. SSL provisioning and renewal, WAF with active tuning, brute-force protection, malware scanning, patch management, and server hardening. On AWS infrastructure, monitored by people who know what they’re looking at.
If your site has already been compromised, follow our guide on how to safely restore a hacked WordPress site.
Not as add-ons. Not as paid extras. As the baseline.
Because that’s what managed actually means.
Ready to stop worrying about server security? Talk to us about hosting that handles it properly from day one.