2805 Bowers Ave, Santa Clara, CA 95051 | 408-730-2275
sales@colfax-intl.com

Colfax CX1265i-NVMe4-S12-X8 1U Rackmount Server

 
 
  • 2x 3rd Gen Intel® Xeon® Scalable Processors
  • 32x DIMMS Support DDR4 RDIMM/LRDIMM and Intel® Optane™ Persistent Memory 200 Series Modules
  • 12x 2.5" Gen4 U.2 NVMe Drive Bays
  • Intel® DSG System Debug Log Advisor (SDLA)

Hardware Features

  • 2x 3rd Gen Intel® Xeon® Scalable Processors
  • Intel C621A Chipset
  • 32x DIMMS Support DDR4 RDIMM/LRDIMM and Intel® Optane™ Persistent Memory 200 Series Modules
  • 12x 2.5" Gen4 U.2 NVMe Drive Bays
  • 2x M.2 NVMe SSDs
  • Riser Slot #1:
    - 1x PCIe Slot Riser Card Supporting 1x PCIe 4.0 x16 Low-Profile/Half-Length, Single-Width Slot
  • Riser Slot #2:
    - 1x PCIe Slot Riser Card Supporting 1x PCIe 4.0 x16 Low-Profile/Half-Length, Single-Width Slot
    - 1x NVMe Riser Card Supporting 1x PCIe 4.0 x16 Low-Profile/Half-Length, Single-Width Slot + PCIe 4.0 x8 NVMe SlimSAS Connector with Re-Timer
  • Riser Slot #3:
    - 1x NVMe Riser Card Supporting 2x PCIe NVMe SlimSAS Connectors
  • 1x OCP 3.0 PCIe 4.0 x16 Mezzanine Slot Supports Intel® Ethernet Network Adapters
  • Integrated Video Controller
  • Server Management:
    - Integrated Baseboard Management Controller (BMC)
    - Intelligent Platform Management Interface (IPMI) 2.0 Compliant
    - Redfish* Compliant
    - Support for Intel® Data Center Manager (DCM)
    - Support for Intel® Server Debug and Provisioning Tool (SDPTool)
    - Dedicated RJ45 1 GbE Management Port
    - Light Guided Diagnostics
  • Intel® DSG System Debug Log Advisor (SDLA)
    - Enables customers to quickly identify and resolve common server support issues on their own
  • 1x 1300W or 1600W AC 80+ Titanium Efficiency Power Supply*
    * The system can have up to two power supply modules installed, supporting the following power configurations: 1+0, 1+1 redundant power, and 2+0 combined power
  • Optional Intel® Trusted Platform Module 2.0


Optional Features

Rack Mount Kit Options

  • Value Rack Mount Rail Kit (CYPHALFEXTRAIL):
    - 1U, 2U compatible
    - Tool-less chassis attachment
    - Tools required to attach rails to rack
    - Rack installation front and rear post distance adjustment from 660 mm to 838 mm
    - 560 mm travel distance
    - Half extension from rack
    - Support for front cover removal and fan replacement
    - 31 kg (68.34 lbs.) maximum supported weight
    - No Cable Management Arm support
  • Premium Rail Kit with Cable Management Arm (CMA) Support (CYPFULLEXTRAIL):
    - 1U, 2U compatible
    - Tool-less installation
    - Rack installation front and rear post distance adjustment from 623 mm to 942 mm
    - 820 mm travel distance
    - Full extension from rack
    - 31 kg (68.34 lbs.) maximum supported weight
    - Support for Cable Management Arm AXXCMA2


PCIe Add-in Card Support

The server system supports a variety of riser card options to provide add-in card support and to enhance the base feature set of the system. These riser cards are available as accessory options for the server system. The system provides concurrent support for up to four PCIe riser cards, including one PCIe Interposer riser card and up to two NVMe riser cards.

PCI Express Bifurcation

The server system supports riser cards through riser slots identified as Riser Slot #1, Riser Slot #2, Riser Slot #3, and the PCIe Interposer Riser Slot. The PCIe data lanes for Riser Slot #1 are supported by CPU 0. The PCIe data lanes for Riser Slot #2, Riser Slot #3, and the PCIe Interposer Riser Slot are supported by CPU 1. A dual processor configuration is required when using Riser Slot #2, Riser Slot #3, or the PCIe Interposer Riser Slot.

The system supports the following PCIe bifurcation options (the notation is decoded in the sketch after the note below):

  • Add-in card slot on 1-Slot PCIe* riser card (iPC – CYP1URISER1STD) for Riser Slot #1:
    - x16/x8x8/x8x4x4/x4x4x8/x4x4x4x4
  • Add-in card slot on 1-Slot PCIe* riser card (iPC – CYP1URISER2STD) for Riser Slot #2:
    - x16/x8x8/x8x4x4/x4x4x8/x4x4x4x4
  • Add-in card slot on 2-Slot PCIe* Riser Card (iPC – CYP1URISER2KIT) for Riser Slot #2:
    - x16/x8x8/x8x4x4/x4x4x8/x4x4x4x4

Note: Riser Slot #3 does not support add-in cards.
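The bifurcation strings above are shorthand for how a slot's 16 lanes are split into independent links; x8x4x4, for example, presents one x8 link and two x4 links. The following minimal sketch (in Python; the function name and parsing approach are illustrative, not part of any Intel tooling) decodes the notation:

    import re

    def decode_bifurcation(setting, slot_lanes=16):
        """Decode a bifurcation string such as 'x8x4x4' into per-link lane widths."""
        widths = [int(w) for w in re.findall(r"x(\d+)", setting)]
        if sum(widths) != slot_lanes:
            raise ValueError("%s does not account for all %d lanes" % (setting, slot_lanes))
        return widths

    # Every setting supported on the x16 riser slots decodes cleanly:
    for s in ["x16", "x8x8", "x8x4x4", "x4x4x8", "x4x4x4x4"]:
        print(s, "->", decode_bifurcation(s))

A x4x4x4x4 setting is what a quad-M.2 carrier card typically requires, since each NVMe SSD needs its own x4 link.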



PCIe Riser Card for Riser Slot #1 (iPC – CYP1URISER1STD)

The one-slot PCIe riser card supports one low-profile, half-length, single-width add-in card.


PCIe Riser Card for Riser Slot #2 (iPC – CYP1URISER2STD)

The one-slot PCIe riser card supports one low-profile, half-length, single-width add-in card.


PCIe NVMe Riser Card for Riser Slot #2 (iPC – CYP1URISER2KIT)

The PCIe NVMe riser card has two connectors. The connector labeled "Slot1_PCIe_x16" supports one low-profile, half-length, single-width add-in card. The x8 PCIe SlimSAS connector is used to provide PCIe data lanes to the PCIe interposer riser card. This connector does not support connection to the NVMe drives in the front drive bay.

The Intel accessory kit (iPC – CYP1URISER2KIT) includes the PCIe interposer riser card, PCIe NVMe riser card, and PCIe interposer cable.

PCIe Interposer Card (iPC – CYP1URISER2KIT)

The PCIe Interposer Riser Slot and PCIe interposer riser card were designed to provide additional add-in card support for the server system. The PCIe interposer riser card is an accessory option supported by the PCIe Interposer Riser Slot.

This card has one PCIe add-in card slot (x8 electrical, x8 mechanical) labeled "Slot1_PCIe_x8" that supports one low-profile, half-length, single-width add-in card. It also has one x8 PCIe NVMe SlimSAS connector labeled "Slot1_PCIe_AIC_Interposer". The interposer card's functionality depends on the PCIe NVMe riser card in Riser Slot #2: the x8 PCIe data lanes used by the PCIe add-in card slot are routed over an interface cable from the Intel PCIe NVMe riser card (accessory option) installed in Riser Slot #2. To use the interposer card, the PCIe NVMe SlimSAS connector on the PCIe interposer riser card must be connected to the PCIe NVMe SlimSAS connector (PCIe_SSD_0-1) on the NVMe riser card using the PCIe interposer cable.


PCIe NVMe Riser Card for Riser Slot #3 (iPC – CYPRISER3RTM)

The server system supports one NVMe riser card option for front drive bay support. The two-slot PCIe NVMe riser card provides two x8 PCIe SlimSAS connectors labeled "PCIe_SSD_0-1" and "PCIe_SSD_2-3". Each connector supports up to two NVMe SSDs in the front drive bay through a backplane.




Intel® Trusted Platform Module (TPM) 2.0

A TPM is a hardware-based security device that addresses the growing concern about boot process integrity and offers better data protection. TPM protects the system start-up process by ensuring it is tamper-free before releasing system control to the operating system. A TPM device provides secured storage for data such as security keys and passwords. In addition, a TPM device has encryption and hash functions.

AXXTPMENC8 implements TPM as per the TPM PC Client specification, revision 2.0, by the Trusted Computing Group (TCG).




System Memory

Overview

The server system supports standard DDR4 RDIMMs and LRDIMMs, and Intel® Optane™ persistent memory 200 series modules. It can be populated with a combination of DDR4 DRAM DIMMs and Intel® Optane™ persistent memory 200 series modules.

Intel® Optane™ PMem (persistent memory) is an innovative technology that delivers a unique combination of affordable large memory capacity and data persistence (non-volatility). It represents a new class of memory and storage technology architected specifically for data center usage. Intel® Optane™ PMem 200 series enables higher density (capacity per DIMM) DDR4-compatible memory modules with near-DRAM performance and advanced features not found in standard SDRAM. The module supports the following operating modes:

  • Memory mode (MM)
  • App Direct (AD) mode

Intel® Optane™ Persistent Memory 200 Series Module – Memory Mode (MM)
In Memory mode, the standard DDR4 DRAM DIMMs act as a cache for the most frequently accessed data, while Intel® Optane™ persistent memory 200 series modules provide large memory capacity by acting as direct load/store memory. In this mode, applications and the operating system are explicitly aware that the Intel® Optane™ persistent memory 200 series is the only type of direct load/store memory in the system. Cache management operations are handled by the integrated memory controller on the Intel® Xeon® Scalable processors. When data is requested from memory, the memory controller first checks the DRAM cache. If the data is present, the response latency is identical to DRAM. If the data is not in the DRAM cache, it is read from the Intel® Optane™ persistent memory 200 series modules with slightly longer latency. Applications with consistent data retrieval patterns that the memory controller can predict will have a higher cache hit rate. Data is volatile in Memory mode: it will not be saved in the event of power loss. Persistence is enabled in App Direct mode.
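As a rough illustration of why the cache hit rate matters in Memory mode, consider a toy latency model (the latency figures below are illustrative assumptions, not Intel specifications):

    def effective_latency_ns(hit_rate, dram_ns=80.0, pmem_ns=300.0):
        """Average load latency with DRAM caching PMem (illustrative numbers only)."""
        return hit_rate * dram_ns + (1.0 - hit_rate) * pmem_ns

    for hr in (0.99, 0.90, 0.50):
        print("hit rate %.0f%%: ~%.0f ns average" % (hr * 100, effective_latency_ns(hr)))

At high hit rates the blended latency stays close to DRAM, which is why workloads with predictable access patterns benefit most.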

Intel® Optane™ Persistent Memory 200 Series Module – App Direct (AD) Mode
In App Direct mode, applications and the operating system are explicitly aware that there are two types of direct load/store memory in the platform, and they can direct each read or write to the type of memory best suited to it. Operations that require the lowest latency and do not need permanent data storage can be executed on DRAM DIMMs, such as database "scratch pads". Data that needs to be made persistent, or structures that are very large, can be routed to the Intel® Optane™ persistent memory. App Direct mode must be used to make data persistent in memory. This mode requires an operating system or virtualization environment enabled with a persistent memory-aware file system.

App Direct mode requires both driver and explicit software support. To ensure operating system compatibility, visit https://www.intel.com/content/www/us/en/architecture-and-technology/optanememory.html
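As a sketch of what App Direct usage can look like from software, the example below memory-maps a file on a DAX-enabled (persistent memory-aware) file system and stores data directly. The mount point /mnt/pmem and file name are hypothetical, and production code would typically use a persistent memory library such as PMDK rather than raw mmap:

    import mmap
    import os

    # Hypothetical DAX mount backed by PMem in App Direct mode
    path = "/mnt/pmem/example.dat"
    size = 4096

    fd = os.open(path, os.O_CREAT | os.O_RDWR)
    os.ftruncate(fd, size)

    with mmap.mmap(fd, size) as mm:
        mm[0:9] = b"persisted"  # direct load/store into the mapped region
        mm.flush()              # make the write durable before unmapping
    os.close(fd)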


Intel® Optane™ Persistent Memory 200 Series Module Rules

All operating modes:

  • Only Intel® Optane™ persistent memory 200 series modules are supported
  • Intel® Optane™ persistent memory 200 series modules of different capacities cannot be mixed within or across processor sockets
  • Memory slots supported by the integrated memory controller 0 (IMC 0) (memory channels A and B) of a given processor must be populated before memory slots on other IMCs
  • For multiple DIMMs per channel:
    • Only one Intel® Optane™ persistent memory 200 series module is supported per memory channel
    • Intel® Optane™ persistent memory 200 series modules are supported in either DIMM slot when mixed with LRDIMM or 3DS-LRDIMM
    • Intel® Optane™ persistent memory 200 series modules are only supported in DIMM slot 2 (black slot) when mixed with RDIMM or 3DS-RDIMM
  • SDRAM SRx8 DIMMs are not supported within the same channel as an Intel® Optane™ persistent memory 200 series module in any operating mode
  • Use the same DDR4 DIMM type and capacity for each DDR4 + Intel® Optane™ persistent memory 200 series module combination

Memory mode:

  • Populate each memory channel with at least one DDR4 DIMM to maximize bandwidth
  • Intel® Optane™ persistent memory 200 series modules must be populated symmetrically for each installed processor (corresponding slots populated on either side of the processor)

App Direct mode:

  • Minimum of one DDR4 DIMM per IMC (IMC 0, IMC 1, IMC 2 and IMC 3) for each installed processor
  • Minimum of one Intel® Optane™ persistent memory 200 series module for the board
  • Intel® Optane™ persistent memory 200 series modules must be populated symmetrically for each installed processor (corresponding slots populated on either side of the processor)

Notes on Intel® Optane™ persistent memory 200 series module population:

  • For MM, the recommended ratio of standard DRAM capacity to Intel® Optane™ persistent memory 200 series module capacity is between 1:4 and 1:16 (a worked example follows this list)
  • For each individual population, rearrangements between channels are allowed as long as the resulting population is consistent with defined memory population rules
  • For each individual population, the same DDR4 DIMM must be used in all slots, as specified by the defined memory population rules
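For example, 16x 16 GB RDIMMs (256 GB of DRAM) paired with 8x 128 GB PMem modules (1 TB) yields a 1:4 ratio, at the low end of the recommended range. A small sketch of the check (the helper name is illustrative):

    def mm_ratio_ok(dram_gb, pmem_gb):
        """True if the DRAM:PMem capacity ratio falls in the recommended 1:4 to 1:16 band."""
        ratio = float(pmem_gb) / dram_gb
        return 4.0 <= ratio <= 16.0

    print(mm_ratio_ok(256, 1024))  # 1:4  -> True
    print(mm_ratio_ok(256, 8192))  # 1:32 -> False (DRAM cache likely too small)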

Server Management

Overview

The server uses the baseboard management controller (BMC) features of an ASPEED* AST2500 server management processor. The BMC supports multiple system management features, including intra-system sensor monitoring, fan speed control, system power management, and system error handling and messaging. It also provides remote platform management capabilities, including remote access, monitoring, logging, and alerting features.

In support of system management, the system includes a dedicated management port and support for two system management tiers and optional system management software.

  • Standard management features (Included)
  • Advanced management features (Optional)
  • Intel® Data Center Manager (DCM) support (Optional)

Remote Management Port
The server board includes a dedicated 1 Gb/s RJ45 management port used to access embedded system management features remotely.

Standard System Management Features
The following system management features are supported by default.

  • Virtual KVM over HTML5
  • Integrated BMC Web Console
  • Redfish
  • IPMI 2.0
    • Node Manager
  • Out-of-band BIOS/BMC Update and Configuration
  • System Inventory
  • Autonomous Debug Log
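Because the BMC is Redfish compliant, basic health and inventory data can be pulled over the dedicated management port with any HTTPS client. A minimal sketch using the standard Redfish service root (the BMC address and credentials are placeholders, and the exact resource layout can vary with firmware):

    import requests

    BMC = "https://192.168.1.100"  # dedicated management port (placeholder address)
    AUTH = ("admin", "password")   # placeholder credentials

    # The Systems collection is part of the standard Redfish data model.
    # verify=False tolerates the BMC's self-signed certificate.
    systems = requests.get(BMC + "/redfish/v1/Systems", auth=AUTH, verify=False).json()

    for member in systems["Members"]:
        info = requests.get(BMC + member["@odata.id"], auth=AUTH, verify=False).json()
        print(info.get("Model"), info.get("PowerState"),
              info.get("Status", {}).get("Health"))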

Advanced Management Features
Advanced manageability features are supported over all NIC ports enabled for server manageability. This includes baseboard integrated BMC-shared NICs, which share network bandwidth with the host system, as well as the LAN channel provided by the onboard Intel® Dedicated Server Management NIC.

  • Software Key to enable features
  • Included single system license for Intel® Data Center Manager (Intel® DCM)
    • Intel® Data Center Manager (Intel® DCM) is a software solution that collects and analyzes the real-time health, power, and thermals of a variety of devices in data centers, helping you improve efficiency and uptime.
  • Virtual Media Image Redirection (HTML5 and Java*)
  • Virtual Media over network share and local folder
  • Active Directory support
  • Full system firmware update including drives, memory, and RAID (Tentative Availability Q4 2021)
  • Storage and network device monitoring (Tentative Availability Q4 2021)
  • Out-of-band hardware RAID Management for latest Intel® RAID cards (Tentative Availability Q4 2021)

More Information
Download Integrated BMC Web Console User Guide


Intel® Data Center Manager (Intel® DCM)

Intel® DCM is a solution for out-of-band monitoring and managing the health, power, and thermals of servers and a variety of other types of devices.

What can you do with Intel® DCM?

  • Automate health monitoring
  • Improve system manageability
  • Simplify capacity planning
  • Identify underutilized servers
  • Measure energy use by device
  • Pinpoint power/thermal issues
  • Create power-aware job scheduling tasks
  • Increase rack densities
  • Set power policies and caps
  • Improve data center thermal profile
  • Optimize application power consumption
  • Avoid expensive PDUs and smart power strips

More Information
Download Intel® Data Center Manager Product Brief
Download Intel® Data Center Manager Console User Guide



Intel® DCM Use Cases

Rack Provisioning

Find new ways to increase rack density.

Intelligent Power

Collect real-time data without deploying costly redundant infrastructure, replacing the need for intelligent power distribution units.

Disaster Avoidance

With real-time monitoring and management, it's possible to reduce power failures and other disasters.

Equipment Scheduling

Increase your ability to meet workload demands with equipment scheduling, and make your data center do more.

Build Real-Time Thermal Maps

Build real-time thermal maps to avoid the guesswork that leads to undercooling or overcooling.

Ghost Servers

Identify ghost servers, and get data center power usage under control.

Intel® Power Thermal Aware Solution

Identify energy efficiency issues in the data center to avoid service delays and gain savings.

Granular Rack-Level Thermal Monitoring

Enable the Intel® DCM to recognize an out-of-range temperature reading and allow the user to take immediate action.

Granular Server-Level Thermal Monitoring

Get greater granular server-level thermal visibility, so when temperatures rise, it registers with the Intel® DCM.

Predictive Detection of Cooling Anomalies

Predict cooling issues before they happen with a patented algorithm that detects anomalies in time to be resolved before a thermal issue occurs.

Server Health Management

Enable server health management with real-time sub-component monitoring, error detection, proactive health management, and server firmware synchronization.

Updating Firmware of Intel® Data Center Blocks

Monitor and update the firmware of data center systems with Intel® DCM.

Intel® Memory Failure Prediction

Through multi-dimensional models and algorithms, DIMM errors are mined at the micro level to assign health scores and identify future failures in real time.

Technical Specifications

Dimensions (HxWxL)
• 1U Rackmount
• 1.7" x 17.2" x 30.7" (43 mm x 438 mm x 781 mm)

CPU
• Dual Socket-P4 LGA4189
• Support for 3rd Gen Intel® Xeon® Scalable Processors
• Max TDP up to 205 W
• UPI links: up to three at 11.2 GT/s (Platinum and Gold families) or up to two at 10.4 GT/s (Silver family)

Note: Supported 3rd Gen Intel® Xeon® Scalable processor SKUs must not end in (H), (L), (U), or (Q). All other processor SKUs are supported.

Chipset
• Intel® C621A

Memory
• 32x DIMM slots
  - 16 DIMM slots per processor, eight memory channels per processor
  - Two DIMMs per channel
• Supports Registered DDR4 (RDIMM), 3DS-RDIMM, Load Reduced DDR4 (LRDIMM), 3DS-LRDIMM
• Intel® Optane™ persistent memory 200 series
• Memory capacity
  - Up to 6 TB per processor (processor SKU dependent)
• Memory data transfer rates
  - Up to 3200 MT/s at one or two DIMMs per channel (processor SKU dependent)
• DDR4 standard voltage of 1.2V
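As a point of reference on the DDR4 side, 3200 MT/s across a 64-bit (8-byte) channel works out to 25.6 GB/s of peak bandwidth per channel, or roughly 204.8 GB/s of theoretical DRAM bandwidth per processor across its eight channels.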
Riser Support
Concurrent support for up to four riser cards, including one PCIe Interposer riser card, with support for up to three PCIe add-in cards. In the descriptions below, HL = Half Length, LP = Low Profile.

Riser Slot #1:
• Supports x16 PCIe lanes routed from CPU 0
• PCIe 4.0 support for up to 32 GB/s

Riser Slot #1 supports the following Intel riser card option:
• One-slot PCIe riser card supporting 1x LP/HL, single-width slot (x16 electrical, x16 mechanical) – iPC CYP1URISER1STD

Riser Slot #2:
• Supports x24 PCIe lanes routed from CPU 1
• PCIe 4.0 support for up to 32 GB/s

Riser Slot #2 supports the following Intel riser card options:
• One-slot PCIe riser card supporting 1x LP/HL, single-width slot (x16 electrical, x16 mechanical) – iPC CYP1URISER2STD
• NVMe riser card supporting 1x LP/HL, single-width slot (x16 electrical, x16 mechanical) + 1x x8 PCIe NVMe SlimSAS connector with re-timer – included in iPC CYP1URISER2KIT

PCIe Interposer Riser Slot (requires PCIe NVMe riser card in Riser Slot #2):
• Supports the PCIe interposer riser card as an accessory option. This card supports one PCIe add-in card (x8 electrical, x8 mechanical). The PCIe interposer riser card can be used only when it is connected to the PCIe NVMe riser card in Riser Slot #2. The interposer card uses x8 PCIe data lanes routed from the PCIe SlimSAS connector on the PCIe NVMe riser card. The Intel accessory kit (iPC CYP1URISER2KIT) includes the PCIe interposer riser card, PCIe NVMe riser card, and PCIe interposer cable.

Riser Slot #3:
• Supports x16 PCIe lanes routed from CPU 1
• PCIe 4.0 support for up to 32 GB/s

Riser Slot #3 supports the following Intel riser card option:
• NVMe riser card supporting 2x PCIe NVMe SlimSAS connectors – iPC CYPRISER3RTM

Note: Riser Slot #3 does not support add-in cards.
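The "up to 32 GB/s" figure for the x16 riser slots follows from PCIe 4.0 link arithmetic: 16 GT/s per lane with 128b/130b encoding gives 16 x 16 x (128/130) / 8 ≈ 31.5 GB/s per direction, conventionally rounded to 32 GB/s.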
Open Compute Project (OCP) Adapter Support
Onboard x16 PCIe 4.0 OCP 3.0 Mezzanine connector (Small Form-Factor) supports the following Intel accessory options:
• Dual port, RJ45, 10/1 GbE – iPC X710T2LOCPV3
• Quad port, SFP+ DA, 4x 10 GbE – iPC X710DA4OCPV3
• Dual port, QSFP28 100/50/25/10 GbE – iPC E810CQDA2OCPV3
• Dual port, SFP28 25/10 GbE – iPC E810XXVDA2OCPV3
PCIe NVMe Support
• Support for up to 10 PCIe NVMe interconnects
  - Eight server board SlimSAS connectors, four per processor
  - Two M.2 NVMe/SATA connectors
• Additional NVMe support through select riser card options (see Riser Support above)

SATA
• 10x SATA III ports (6 Gb/s, 3 Gb/s, and 1.5 Gb/s transfer rates supported)
  - Two M.2 connectors – SATA/PCIe
  - Two 4-port Mini-SAS HD (SFF-8643) connectors
USB
• Three USB 3.0 connectors on the back panel
• One USB 3.0 and one USB 2.0 connector on the front panel
• One USB 2.0 internal Type-A connector

Serial
• One external RJ45 Serial Port A connector on the back panel
• One internal DH-10 Serial Port B header for optional front or rear serial port support; the port follows the DTK pinout specifications

Video
• Integrated 2D video controller
• 128 MB of DDR4 video memory
• One VGA DB-15 external connector on the back panel
Server Management
• Integrated Baseboard Management Controller (BMC)
• Intelligent Platform Management Interface (IPMI) 2.0 Compliant
• Redfish* Compliant
• Support for Intel® Data Center Manager (DCM)
• Support for Intel® Server Debug and Provisioning Tool (SDPTool)
• Dedicated RJ45 1 GbE Management Port
• Light Guided Diagnostics

Security Support
• Intel® Platform Firmware Resilience (Intel® PFR) technology with an I2C interface
• Intel® Software Guard Extensions (Intel® SGX)
• Intel® CBnT – Converged Intel® Boot Guard and Trusted Execution Technology (Intel® TXT)
• Intel® Total Memory Encryption (Intel® TME)
• Trusted Platform Module 2.0 – iPC J33567-151 (accessory option)
Storage Bay
• 12x 2.5" SAS/SATA/NVMe hot swap drive bays

Power Supply
The server system can have up to two power supply modules installed, supporting the following power configurations: 1+0, 1+1 redundant power, and 2+0 combined power.
• 1x 1300W or 1600W AC power supply
• 80 PLUS Titanium efficiency

System Fans
• Eight managed 40 mm hot-swap-capable system fans
• Integrated fans included with each installed power supply module

BIOS
• Unified Extensible Firmware Interface (UEFI)-based BIOS (legacy boot not supported)

Configuration

Colfax CX1265i-NVMe4-S12-X8 1U Rackmount Server, cost as configured: $5,021.79

  • Base Platform
  • Front Bezel
  • Management
  • Power Supply 1 and Power Supply 2
  • Rackmount Kit
  • Cable Management
  • Primary CPU and Secondary CPU
  • DIMM sockets: CPU 1 sockets 1-16, CPU 2 sockets 1-16
  • M.2 Drives 1-2
  • NVMe Drives 1-12
  • NVMe Cable Kit (1-8)
  • NVMe Retimer (9-12)
  • NVMe Cable Kit (9-12)
  • Intel VROC
  • Riser Card 2
  • OCP 3.0 Networking
  • InfiniBand HBA
  • Ethernet HBA
  • TPM Module
  • Operating System SW

    

CX1265i-NVMe4-OPT-S12-X8 1U Rackmount Server, cost as configured: $5,021.79

Identical component selection to the configuration above, with memory configured per DIMM slot: for each CPU, slots 1 and 2 of channels A through H across IMC 0-IMC 3 (16 DIMM slots per processor).