MIZANIC
Published on AWS Marketplace · CIS-aligned · Maintained

Hardened images

Pre-hardened OS and prepackaged software images, listed on AWS Marketplace. Golden baselines you can deploy in minutes, with security and compliance posture baked in. Maintained, patched, signed.

What's in the box

Production-ready out of the gate.

Each image is built with the assumption it lands in a regulated production environment on day one.

Baseline

OS hardened to CIS Level 1 or Level 2

Configured to CIS benchmarks with documented exceptions. Audit-ready from first boot. STIG variants available for federal and defense workloads.

Agents

Observability and management baked in

SSM, CloudWatch agent, vulnerability scanner, and EDR hooks pre-installed and configured to talk to your tooling.

Identity

SSO-ready, least-privilege

Pre-wired for SSO and instance-profile patterns. No long-lived credentials. Documented IAM policies that ship alongside the image.

Lifecycle

Patched, signed, auditable

Cadence-based patching, signed images, and a published advisory log. Your auditor can trace any image back to its build.

How to deploy

Start from a hardened baseline. Customize from there.

Marketplace images are particularly useful when you're standing up a regulated workload on a tight timeline. Instead of building a hardened baseline from scratch, you start from ours and customize for your specific compliance, identity, and operating context.
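A minimal launch sketch with the AWS CLI, assuming the image is already subscribed in your account; the AMI ID, subnet, and instance-profile name are placeholders for the values on your Marketplace listing and in your VPC:

```shell
# Launch an instance from a hardened Marketplace AMI.
# All IDs below are placeholders -- look up the region-specific AMI ID
# on the listing page and substitute your own subnet and profile.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m6i.large \
  --subnet-id subnet-0abc1234 \
  --iam-instance-profile Name=app-instance-profile \
  --metadata-options HttpTokens=required \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=baseline,Value=cis-l1}]'
```

`HttpTokens=required` enforces IMDSv2, in keeping with the no-long-lived-credentials posture; the `baseline` tag is just one way to keep derivative instances traceable to their starting image.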

For consultancies and SIs with their own Marketplace presence: we also co-list and bundle on partner terms — mention it on the form.

Discuss deployment
— Catalog (illustrative)
linux Ubuntu LTS · RHEL · Amazon Linux
windows Server 2022 STIG
k8s nodes EKS-optimized hardened
db Postgres · MySQL · MongoDB hardened
app Nginx · Tomcat · OpenSearch hardened
private-ai vLLM · Ollama · embeddings · vector store

Featured listing · Private AI

A hardened Private AI image, ready for regulated workloads.

The same hardening discipline applied to an open-source AI stack. Deploy it inside your VPC and run inference, embeddings, and RAG without your data ever leaving the account.

Private AI · GPU image

What ships in the image

  • Open-source LLM runtime (vLLM / Ollama) preconfigured for GPU instances, with Llama, Mistral, and Mixtral models ready to pull.
  • Embeddings + vector store wired in (pgvector / OpenSearch) so RAG workloads boot without an integration sprint.
  • Network posture defaults to no-egress: traffic stays inside your VPC, no third-party API calls, no model telemetry leaving the account.
  • SSO-fronted inference endpoints with per-tenant rate limits and audit logging for every prompt and completion.
  • GPU drivers, CUDA, and observability hooks pre-baked — patched on the same cadence as the rest of the catalog.
— Inside the Private AI image
runtime vLLM · Ollama · TGI
models Llama · Mistral · Mixtral
rag pgvector · OpenSearch
network VPC-only · no egress
identity SSO · per-tenant audit
ops GPU drivers · CUDA · patched
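The no-egress default above can be reproduced (or audited) with a security group that drops the usual allow-all egress rule. A sketch, assuming a hypothetical VPC ID and a 10.0.0.0/16 VPC CIDR:

```shell
# Create a security group for the inference node.
SG_ID=$(aws ec2 create-security-group \
  --group-name private-ai-no-egress \
  --description "Private AI: VPC-only traffic" \
  --vpc-id vpc-0abc1234 \
  --query GroupId --output text)

# Strip the default allow-all egress rule AWS adds to every new group...
aws ec2 revoke-security-group-egress \
  --group-id "$SG_ID" \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

# ...then permit egress only within the VPC CIDR, so no third-party
# API calls or model telemetry can leave the account.
aws ec2 authorize-security-group-egress \
  --group-id "$SG_ID" \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"10.0.0.0/16"}]}]'
```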

Marketplace FAQ

Common questions about the hardened images.

Where are the images published?
On AWS Marketplace, under the Mizanic seller account. Each image is signed, versioned, and traceable back to a published advisory log so your auditor can confirm provenance.
What does CIS-aligned actually mean for these images?
Each OS image ships configured against a Center for Internet Security benchmark — Level 1 for general production, Level 2 for high-sensitivity workloads. We document the exceptions where a control is impractical, with the security rationale, so auditors and compliance teams can review the deviations in one place. STIG variants are available for federal and defense workloads.
Can we customize the images, or do we need to use them as-is?
Use them as a baseline and customize from there. Most clients fork the build pipeline, add their internal CA roots, monitoring agents, and compliance overlays, then maintain their own derivative AMI cadence on top of ours. The hardening discipline carries forward; the local customization stays local.
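The derivative-AMI loop can be sketched in three CLI steps; the AMI ID and naming convention here are placeholders, and the customization step stands in for whatever config-management tooling you run:

```shell
# 1. Launch from the hardened baseline (placeholder AMI ID).
INSTANCE_ID=$(aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium \
  --query 'Instances[0].InstanceId' --output text)

# 2. Apply your local layer: internal CA roots, monitoring agents,
#    compliance overlays (e.g. via SSM Run Command).

# 3. Snapshot your own derivative AMI on your own cadence.
aws ec2 create-image \
  --instance-id "$INSTANCE_ID" \
  --name "acme-hardened-base-$(date +%Y%m%d)" \
  --description "Derivative of CIS L1 hardened baseline"
```

Teams that want this fully repeatable typically drive the same three steps from an image-building pipeline rather than by hand.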
How does the Private AI image differ from a self-hosted LLM setup?
The image arrives with the runtime (vLLM, Ollama, or TGI), the GPU drivers and CUDA, the RAG building blocks (pgvector or OpenSearch), and network defaults that keep traffic inside your VPC — no egress, no third-party API calls, no model telemetry. Inference endpoints are SSO-fronted with per-tenant rate limits and per-prompt audit logging. You're not gluing together a stack; you're booting one.
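vLLM exposes an OpenAI-compatible API, so a first inference call from inside the VPC looks like a standard chat-completions request; the hostname, token variable, and model name below are placeholders for your endpoint and whichever model you pulled:

```shell
# Call the in-VPC inference endpoint (placeholder hostname and token).
# vLLM's OpenAI-compatible server accepts the standard
# /v1/chat/completions request shape.
curl -s https://inference.internal.example/v1/chat/completions \
  -H "Authorization: Bearer $SSO_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Summarize this policy."}]
      }'
```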
Are images patched on a schedule?
Yes. Cadence-based patching with a published advisory log that records every CVE addressed, every base-image update, and every signing event. Subscribers are notified when a new version ships; older versions remain available for the documented support window.
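Pinning to the newest published version can be automated with a describe-images query; the owner account ID and name pattern are placeholders for the real listing values:

```shell
# Resolve the most recently published version of an image family
# by creation date. Owner ID and name pattern are placeholders.
aws ec2 describe-images \
  --owners 123456789012 \
  --filters 'Name=name,Values=hardened-ubuntu-lts-cis-l1-*' \
  --query 'sort_by(Images, &CreationDate)[-1].{id:ImageId,name:Name,date:CreationDate}'
```

Running this in your launch pipeline keeps new instances on the current version while older AMIs stay resolvable for audit trails.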

Want a hardened baseline for an upcoming engagement?

Tell us the workload, the compliance bar, and the timeline. We'll point you at the right starting image.