
How to size a server for 100 users?

SCRAM Consulting Editorial Team · Updated: May 2, 2026

Direct answer

There is no single answer for "server for 100 users": sizing depends entirely on the workload. For 100 typical office users, the most common loads are file/print + Active Directory (4-8 cores, 16-32 GB RAM, ~2 TB storage), ERP/accounting (8-16 cores, 32-64 GB RAM, low to medium IOPS), or a virtualization host with 5-15 VMs (16-32 cores, 128-256 GB RAM, NVMe). Practical rule: size to expected peak plus a 30-50% margin, plan a hardware refresh at 5 years factoring in growth, and always verify with your enterprise software vendor which specs they formally validate for your case.

Quick takeaways

  • No single answer — depends on workload (file/print, ERP, virtualization, AD/email)
  • Practical rule: size to expected peak + 30-50% margin for 5-year growth
  • CPU measured in cores and clock — most enterprise workloads need 8-16 cores
  • RAM and storage are usually the real bottlenecks — over-provision conservatively
  • Verify with your software vendor (ERP, database) what specs they formally validate

Why "100 users" is not a spec

The most common mistake when sizing is thinking of "user count" as the single variable. One hundred users using only email and shared files have very different demand from one hundred users querying transactional ERP with a 500 GB database. Correct sizing requires first defining the workload — what the server does — before calculating CPU, RAM or storage.

The 4 typical profiles for ~100-user companies

Profile 1: File + print + Active Directory server

The most common case for SMBs. The server handles authentication, shared files, print queues, and DNS/DHCP services.

  • CPU: 4-8 physical cores (Intel Xeon Bronze/Silver or AMD EPYC entry)
  • RAM: 16-32 GB
  • Storage: 2-4 TB in RAID 5 or RAID 10. SSD mix for OS (240-480 GB) + HDD for data
  • IOPS: low. SSD for OS is enough
  • Network: 2x 1 Gbps ports or 1x 10 Gbps if you have a capable switch

Profile 2: ERP/accounting server (small to mid)

Systems like SAP Business One, Microsoft Dynamics SMB, Odoo. Transactional database with typical office queries.

  • CPU: 8-16 physical cores with high clock (Intel Xeon Silver/Gold, AMD EPYC mid). Yes, clock matters — traditional databases don't scale linearly with cores
  • RAM: 32-64 GB. Database wants large cache
  • Storage: 1-2 TB NVMe in RAID 1 (mirror) or RAID 10. NVMe is non-negotiable for modern transactional database
  • IOPS: medium-high. Verify the ERP vendor's SLA — some require specific minimums
  • Network: 10 Gbps recommended

Profile 3: Virtualization server (VMware/Hyper-V host with 5-15 VMs)

Consolidation: instead of 5 physical servers, one host running 5 VMs, each carrying one workload. This is the dominant model in modern SMBs.

  • CPU: 16-32 physical cores with hyperthreading. Consider at least 1 physical core per critical VM plus 2 cores for the hypervisor
  • RAM: 128-256 GB. Memory is typically the most limiting resource for VM density
  • Storage: 4-8 TB NVMe RAID 10 or shared storage (SAN/NAS) with 10 Gbps. For production, shared storage enables real HA
  • IOPS: high. Sum demand of all VMs
  • Network: 2x 10 Gbps (one for management/VM traffic, another for storage)

Profile 4: Email + collaboration server (Exchange On-Premise)

Less common now that most organizations have migrated to Microsoft 365 / Google Workspace, but it persists in regulated sectors.

  • CPU: 8-16 physical cores
  • RAM: 64-128 GB (Exchange consumes RAM aggressively)
  • Storage: 4-10 TB. Typical mailboxes + archive. Mix SSD for active DBs + HDD for archive
  • IOPS: medium. Peak during backups and migrations
  • Network: 10 Gbps

How to calculate real sizing

CPU

Estimate the base load per user for the workload. For typical office use, a reference figure is 0.05-0.1 virtual cores per simultaneously active user during peak hours. For 100 users that means a base of 5-10 vCPU; add OS overhead, services, and headroom for peaks, which typically lands at 8-16 physical cores with hyperthreading.
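As a rough sketch, the rule above can be turned into a quick estimate. The constants (0.05-0.1 vCPU per active user, 30-50% headroom) are the reference figures from this section; the function name and defaults are illustrative, not a vendor formula.

```python
import math

def size_cpu(users: int, vcpu_per_user: float = 0.075,
             os_overhead_vcpu: float = 2.0, headroom: float = 0.4) -> int:
    """Suggested physical core count (hyperthreading assumed)."""
    base_vcpu = users * vcpu_per_user            # simultaneously active users at peak
    total = (base_vcpu + os_overhead_vcpu) * (1 + headroom)
    return math.ceil(total)

print(size_cpu(100))  # midpoint assumptions -> 14 cores, inside the 8-16 range
```

Varying `vcpu_per_user` between 0.05 and 0.1 and `headroom` between 0.3 and 0.5 roughly spans the 8-16 core range cited above.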

RAM

OS base RAM (4-8 GB for Windows Server / Linux) + RAM per application (this varies massively: SQL Server wants 25%+ of the database size as cache, Exchange wants 4-8 GB per thousand active mailboxes) + overhead. Conservative over-provisioning is always better: adding RAM after purchase is relatively easy, but discovering you are short on RAM after migrating production is painful.
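A minimal sketch of the same addition, assuming a SQL Server-style database that wants roughly 25% of the database size as cache (the figure above); the function name, defaults, and the example workload are illustrative.

```python
import math

def size_ram_gb(os_gb: float = 8, db_size_gb: float = 0,
                other_apps_gb: float = 0, margin: float = 0.3) -> int:
    """OS base + database cache (25% of DB size) + other apps, plus margin."""
    db_cache = 0.25 * db_size_gb
    return math.ceil((os_gb + db_cache + other_apps_gb) * (1 + margin))

# Hypothetical ERP server: Windows Server (8 GB) + 100 GB database + 8 GB of services
print(size_ram_gb(os_gb=8, db_size_gb=100, other_apps_gb=8))  # -> 54 GB
```

The result lands inside the 32-64 GB range given for Profile 2, so the rule of thumb and the profile table agree.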

Storage

Raw capacity + 30-40% for growth + RAID overhead. For 100 users with typical office files, consider 20-50 GB per user for personal/shared data. ERP usually starts at 50-200 GB and grows 30-50% per year. For virtualization, sum capacity of each VM plus hypervisor overhead (10-15%).
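The storage arithmetic above, sketched with illustrative defaults: 35 GB per user (midpoint of the 20-50 GB range), 100 GB of application data, 35% growth margin, and RAID 10 doubling the raw capacity needed.

```python
def size_storage_gb(users: int, gb_per_user: float = 35,
                    app_data_gb: float = 100, growth: float = 0.35,
                    raid_factor: float = 2.0):
    """Return (usable_gb, raw_gb). raid_factor=2.0 models RAID 10 mirroring."""
    usable = (users * gb_per_user + app_data_gb) * (1 + growth)
    return round(usable), round(usable * raid_factor)

usable, raw = size_storage_gb(100)
print(usable, raw)  # 4860 usable GB -> 9720 raw GB in RAID 10
```

For RAID 5, `raid_factor` would instead be n/(n-1) for an n-disk array (e.g. ~1.33 for 4 disks), trading rebuild time and write performance for less raw capacity.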

IOPS and latency

The most underestimated parameter. Modern SSD/NVMe deliver 50K-1M IOPS depending on quality. HDD delivers 100-200 IOPS per disk. For transactional database, low IOPS = slow application regardless of how many cores you have. For critical production, NVMe always.
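For a virtualization host, "sum the demand of all VMs" can be sketched as below. The per-VM IOPS figures are placeholder assumptions, not measurements, and 150 IOPS per HDD is the midpoint of the 100-200 range above.

```python
import math

# Illustrative per-VM peak IOPS demand (placeholder figures, not measurements)
vm_iops = {"AD/DNS": 100, "file server": 300, "ERP database": 2000, "print/utility": 100}

total_iops = sum(vm_iops.values())

def hdds_needed(iops: int, iops_per_hdd: int = 150) -> int:
    """How many spindles a pure-HDD array would need to serve this demand."""
    return math.ceil(iops / iops_per_hdd)

print(total_iops, hdds_needed(total_iops))  # 2500 IOPS -> 17 HDDs
```

A single decent NVMe drive covers this demand with a large margin, which is why the article insists on NVMe for transactional production.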

Common sizing mistakes

  • Comparing only CPU/RAM specs: ignoring storage IOPS is the most expensive mistake. A server with 32 cores and 128 GB but slow disks performs worse than one with 16 cores, 64 GB and NVMe
  • Not considering growth: sizing to current peak without margin leads to forced refresh in 2-3 years instead of 5
  • Ignoring hypervisor overhead: in virtualization, reserve 10-15% of CPU and RAM for the hypervisor
  • Not verifying with software vendor: ERPs and databases have formal requirements — installing out of spec can invalidate vendor support
  • Massive over-provisioning: the opposite mistake also exists. Buying a server for 500 users when you expect 100 is waste that becomes obvious at the 5-year refresh, when most of that capacity was never used

Bottom line

Sizing a server for 100 users depends on workload. For file/print + AD: 4-8 cores, 16-32 GB RAM, 2-4 TB storage. For ERP/database: 8-16 cores, 32-64 GB RAM, mandatory NVMe. For virtualization with 5-15 VMs: 16-32 cores, 128-256 GB RAM, NVMe RAID 10 or shared storage. For Exchange On-Premise: 8-16 cores, 64-128 GB RAM.

Non-negotiable rules: size to expected peak + 30-50% margin, NVMe for transactional production, verify formal specs with your software vendor, consider 5-year refresh. And the master rule: before buying, validate sizing with your integrator or a certified engineer in the stack you will operate — the cost of getting sizing wrong far exceeds the cost of initial consulting.

Frequently asked questions

Should I buy a bigger server "to be safe" or size tightly?

Over-provisioning 30-50% for growth is reasonable; over-provisioning 200-300% is waste. The rule: size to expected peak in 3-year horizon (not day one), add margin for growth and unforeseen peaks, verify the chosen model allows in-place RAM and storage expansion without replacing the entire server. The ability to grow is more valuable than buying idle capacity.

Do I need redundancy (RAID, redundant power, etc.)?

For enterprise production, yes: RAID 1, 10 or 5 (not RAID 0), hot-swap redundant power supplies, and at least 2 NICs in bonding. Without redundancy, a single failure takes down operations. The additional cost of redundancy is low (~10-15%) compared to downtime cost when a single component fails.

One big server or several small ones?

Several small ones give you resilience (if one fails, others continue) and allow workload distribution. One big one is more efficient in unit cost and power consumption. For SMBs with 100 users, 2 medium servers with virtualization and HA is usually optimal: balance between redundancy, efficiency and operational simplicity.

How do I know if I need SAN or local storage?

Local storage (internal SSD/NVMe) is simpler and cheaper; enough for one or two servers with little need for migration between hosts. Shared SAN or NAS enables real HA (if a host fails, its VMs come up on another), but adds cost and complexity. For environments of 3+ virtualized servers with high criticality, shared storage justifies the investment.

How long should a well-sized enterprise server last?

Physically, 7-10 years in a datacenter with good conditions. Functionally, 5-6 years before becoming obsolete against new generations of hypervisors and workloads. If you size with a 30-50% margin, 5 years is comfortable. If you size tight, it will run short before then, forcing a refresh in 3-4 years.

27 years keeping operations running for companies that can't afford to stop.

Grupo Modelo, FEMSA, Bayer, Chedraui and Hertz trust SCRAM. Let's talk about your project.
