Thursday, April 23, 2015

Isilon Command Cheat Sheet

Command Sub-Command Description
man isi To check the man page of isi command
isi license To check the installed license
isi license activate <key> To install the license
isi config To enter the cluster configuration console
isi devices To check the status of the cluster's drives and devices
isi get -a /ifs/data Will give the overall file system layout
isi --help To list the available isi sub-commands and options
isi networks For internal and external network configuration
isi devices -a add -d <device number> To add the device back to cluster
isi devices -a format -d <device number> Often need to format the drive for OneFS use first
isi_for_array -s isi_hw_status | grep -i 'power sup' To check power supply status
isi_hw_status | grep SerNo To check the serial number of the node
isi auth ldap list To list the LDAP provider details
isi nfs exports Manage NFS exports
    netgroup Manage netgroup caching
    nlm Manage NFS NLM sessions, locks, and waiters
    settings Manage NFS default export and global protocol settings
isi quota quotas Manage quotas
    reports Manage quota reports
    settings Manage general quota settings
isi services To check Isilon service
isi services <service name> disable To disable any service
isi services <service name> enable To enable any service
isi set To change protection and layout settings on files and directories
isi smb log-level  Configure the log level
    openfiles List and close open SMB files
    sessions List and disconnect SMB sessions
    settings Manage SMB default share and global protocol settings
    shares Manage SMB shares and their permissions
isi statistics heat Heat mode displays most active /ifs paths for a variety of metrics
    pstat Pstat mode displays a selection of cluster-wide and protocol data
    drive Drive mode shows performance by drive
    list List valid arguments to the given option
isi snapshot locks Manage snapshot locks
    schedules Manage scheduled creation of snapshots
    settings Manage snapshot settings
    snapshots Manage file system snapshots
isi status To check cluster status
isi statistics system --top --nodes --interval=2 To display per-node system statistics, refreshing every two seconds
isi statistics heat --classes=read,write To check the hottest files on the cluster
isi job To check the running jobs / **only one job can run at a time**
LNN set Command to change the logical node number
# isi config Go to config mode first
# lnnset Displays the current LNN assignments
# lnnset 3 7 Changes the LNN ID of node 3 from 3 to 7
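
Putting those steps together, a full LNN change might look like the sketch below. The commit step is an assumption on my part (changes made inside isi config generally need to be committed before they take effect), so verify it against your OneFS version:
# isi config Go to config mode
# lnnset Display the current LNN assignments
# lnnset 3 7 Change the LNN of node 3 to 7
# commit Apply the change (assumed to be required, as for other isi config changes)
# exit Exit config mode
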
isi distill
isi_distill Distill the essence of an IFS directory tree
Example: isi_distill -k /ifs/data
Example: isi_distill -k -o /tmp/distill.txt /ifs/data Dumps the output to a text file
Reboot the Isilon Node
# isi config To go to config mode
# reboot 3 To reboot node 3
# exit To exit from config mode
To check boot drive status:

gmirror status
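
Since isi_for_array -s runs a command on every node (as in the power supply check earlier in this table), it can be combined with the boot drive check above to inspect the boot mirrors across the whole cluster at once:

isi_for_array -s 'gmirror status'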




Wednesday, April 8, 2015

Isilon SmartDedupe

Isilon SmartDedupe allows your cluster to be space efficient by removing redundant data (files that share identical blocks take up less space). SmartDedupe scans the cluster for identical data blocks and moves a single copy of each redundant block to a hidden file called a shadow store. SmartDedupe then deletes the duplicate blocks and replaces them with pointers to the shadow store. Each shadow store consists of 255 blocks, and each block can be referenced 32,000 times.
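
To put those numbers in perspective, assuming the standard 8 KB OneFS block size (an assumption, since the block size isn't stated above): a full shadow store holds 255 blocks × 8 KB ≈ 2 MB of unique data, and because each block can be referenced 32,000 times, a single shadow store can stand in for up to roughly 255 × 8 KB × 32,000 ≈ 62 GiB of logical data.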

Deduplication is applied at the directory level, targeting all files and directories underneath the selected directory.

Note: Dedupe should only be configured on data (folders) that is NOT compressed. Also, don't configure dedupe on /ifs itself; always configure it on a high-level directory underneath it, e.g. /ifs/data/project_tango/

Steps to configure SmartDedupe via CLI on Isilon Cluster:

1) Check whether the license is installed, as SmartDedupe is a licensed feature.

# isi license

2) Specify the file system or source directory:

# isi dedupe settings modify --paths=/ifs/data/project1,/ifs/data/project2

 3) Schedule the deduplication job:

# isi job types modify Dedupe --schedule "Every Saturday at 01:00 AM"

4) Check the stats:

# isi dedupe stats

5) Once the job is completed, you can check its report:

# isi dedupe reports list

# isi dedupe reports view <report number>

6) To check settings, use:

# isi dedupe settings view

7) To check the events, use:

# isi job events list --job-type dedupe
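
If you would rather kick off a deduplication run immediately instead of waiting for the schedule, the job can also be started and monitored by hand. The exact syntax may vary between OneFS releases; the form below is the OneFS 7.x style:

# isi job jobs start Dedupe

# isi job jobs list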



Happy Learning!






Saturday, April 4, 2015

XtremIO Architecture

XtremIO is an all-flash system based on a scale-out architecture. The system uses building blocks, called X-Bricks, which can be clustered together to grow performance and capacity.

XtremIO can scale out from a single X-Brick to a six X-Brick cluster. Scaling the cluster doesn't require any downtime and can be done at any time.


System operation is controlled via a stand-alone, dedicated Linux-based server called the XMS (XtremIO Management Server), which can be either physical or virtual. The array continues to operate even if the XMS gets disconnected from it, but loses its monitoring and configuration capabilities; i.e. data will still be served and there won't be any impact on performance, but you will not be able to monitor or configure the array.



The X-Brick is the fundamental building block of the array. Each X-Brick consists of:

1) A 2U DAE (disk array enclosure) containing 13 or 25 SSDs, two power supply units (PSUs), and two SAS interconnect modules.
2) One battery backup unit
3) Two 1U storage controllers; each controller has two PSUs, two 8 Gb/s FC ports, two 10 GbE iSCSI ports, two 40 Gb/s InfiniBand ports, and one 1 Gb/s management/IPMI port.


A single X-Brick cluster consists of one X-Brick and one additional battery backup unit.

A multiple X-Brick cluster includes two InfiniBand switches and doesn't have any additional battery backup unit.

XtremIO runs a customized Linux-based OS called XIOS. The storage controllers in each X-Brick own the DAE that is attached to them via redundant SAS interconnects. The storage controllers also connect to the redundant InfiniBand fabric switches.

XtremIO provides inline deduplication and compresses the data before it is written to the SSDs. Below are some system features that are available and don't require additional licenses:

1) Thin provisioning
2) Inline data deduplication and compression
3) XDP (XtremIO Data Protection)
4) DRE
5) Snapshots
6) VMware VAAI Integration

With XtremIO, management of the storage array is very easy, as it doesn't require:
1) RAID configuration
2) Sizing considerations for clones and snapshots
3) Tiering
4) Performance tuning

It is also highly integrated with other EMC products like VPLEX, RecoverPoint and PowerPath, as well as with OpenStack.



Happy Learning!

Friday, April 3, 2015

VMAX Architecture

Symmetrix VMAX is EMC's flagship enterprise storage product. Compared to the previous models, VMAX has been optimized for increased availability, performance, and capacity utilization on all tiers with all RAID types. VMAX's enhanced device configuration and replication operations result in easier, faster, and more efficient management of large virtual and physical environments.

An EMC VMAX storage array supports from one to a maximum of eight VMAX engines.

Each of these engines contains two Symmetrix VMAX directors. Each director includes:

                  – Multi-core CPUs (cores per CPU / per engine / per system)
                  – Cache memory (global memory)
                  – Front-end I/O modules
                  – Back-end I/O modules
                  – System Interface Board (SIB)

Apart from this, each engine has redundant power supplies, cooling fans, standby power supplies (SPS), and environmental modules.


All these engines are interconnected using the VMAX Matrix Interface Board Enclosure (MIBE). Each director has two connections to the MIBE via its System Interface Board (SIB) ports, as shown below.

Multi-core CPUs:

Multi-Core CPUs deliver new levels of performance and functionality in a smaller footprint with reduced power and cooling requirements.

Cache memory(global memory):

The Symmetrix VMAX array can be configured with up to 1 TB of global memory (512 GB protected). Memory is located on each director, utilizing up to 8 DIMMs per director. Memory size considerations include the number of applications and replication requirements, as well as drive capacity, speed, and protection. Engines can be configured with 32, 64, or 128 GB of physical memory. Global memory has a maximum system bandwidth of 192 GB/s.
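
As a quick sanity check of those figures: 8 engines × 128 GB = 1,024 GB ≈ 1 TB of raw global memory, and because the memory is mirrored (see below), the protected capacity is half of that, i.e. 512 GB.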



Memory is accessible by any director within the system:

◆ If a system has a single VMAX Engine, physical memory mirrors are internal to the enclosure.
◆ If a system has multiple VMAX Engines, physical memory mirrors are provided between enclosures.

Front End I/O Module :


Front-end modules are used for host connectivity. Host connectivity via Fibre Channel, iSCSI, and FICON is supported.


Back End I/O Module :
Back-end modules provide access to the disk drives. Disk drives are configured under these I/O modules.

System Interface Module(SIB):
SIBs are responsible for interconnecting the VMAX engine's directors through the Matrix Interface Board Enclosure (MIBE). Each VMAX engine has two SIBs, and each SIB has two ports.

Similar to the DMX-3 and DMX-4 arrays, VMAX has two types of bays:

 1. System bay :
The system bay contains all the VMAX engines. Apart from the VMAX engines, it contains the system bay standby power supplies (SPS), an Uninterruptible Power Supply (UPS), the Matrix Interface Board Enclosure (MIBE), and a server (Service Processor) with a Keyboard-Video-Mouse (KVM) assembly.

 2. Storage bay :

Each storage bay can hold up to 16 drive enclosures (DEs), for a maximum of 240 3.5-inch drives per storage bay. The maximum system configuration is 2,400 drives, utilizing 10 storage bays. DEs are storage modules that contain drives, link control cards, and power and cooling components. All DE components are fully redundant and hot swappable. Each DE houses up to 15 drives. Each DE provides physically redundant connections to two separate directors and redundant connections to "daisy-chained" DEs that extend the number of drives accessible per director port. The DE supports dual-ported, 4 Gb/s, back-end Fibre Channel interfaces.
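
Those drive counts follow directly from the figures above: 16 DEs × 15 drives = 240 drives per storage bay, and 10 storage bays × 240 drives = 2,400 drives in a fully configured system.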

Similar to the system bay, each storage bay has redundant PDPs and two SPS units. The SPS can maintain power for two five-minute periods of AC loss, enabling the Symmetrix storage bay to shut down properly. All storage bays are fully pre-cabled and pre-tested at the factory to easily enable future growth.

VMAX Engine Front View :
Below is a VMAX engine front view. As described above, VMAX engines are located in the VMAX system bay. We can see the power supplies located at the two sides and the cooling fan module located in the middle.

VMAX Engine Rear View :
This example displays the rear view of the VMAX engine.

As explained earlier, each VMAX engine contains two director boards, named here as the odd and even directors, four front-end I/O modules, four back-end I/O modules, and two System Interface Boards (SIBs). The back-end I/O modules are numbered Module 0 and Module 1. The System Interface Boards are Modules 2 and 3. The front-end I/O modules are numbered Module 4 and Module 5.

The top director board, combined with the left front-end I/O modules 4 and 5, represents the even-numbered director. The bottom director board, combined with the right front-end I/O modules 4 and 5, represents the odd-numbered director. For example, if this is engine 4, the top director would be director number 8 and the bottom director would be director number 7.
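
Generalizing that example (assuming the numbering continues linearly across the system), engine E holds the odd director 2E − 1 at the bottom and the even director 2E on top: engine 1 holds directors 1 and 2, engine 2 holds directors 3 and 4, and so on up to engine 8 with directors 15 and 16.
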
VMAX Engine Port Assignment :
This is a typical VMAX port assignment diagram.

The above diagram shows the port assignments of the System Interface Boards, the back-end I/O modules, and the front-end I/O modules.

As I explained earlier, VMAX engines are interconnected through the MIBE using System Interface Board ports A and B. Using these ports, all directors communicate through the Virtual Matrix via redundant connections.

Each director within a VMAX engine contains two back-end I/O modules. Each back-end I/O module has a single port, which holds a single Quad Small Form-factor Pluggable (QSFP) connector. The QSFP cable fans out into four smaller cables, each connecting to a drive enclosure, providing back-end Fibre Channel connectivity to the disk drives. On back-end I/O module 0 these connections are designated A0, A1, B0, and B1. On back-end I/O module 1, these connections are designated C0, C1, D0, and D1.

Each director also contains two front-end I/O modules. The port designations on the front-end I/O module will vary based on the interface type. This example represents four Fibre Channel front-end I/O modules. In this configuration, module 4 will contain ports E0, E1, F0, and F1. Module 5 will contain ports G0, G1, H0, and H1.

As we discussed previously, the left two front-end I/O modules are connected to the even-numbered director. If this is engine 4 (the directors associated with engine 4 are directors 7 and 8), then the first port on the left-most module 4 would be director 8 port E0. This is a significant departure from other Symmetrix systems and is a result of the overall increased port count in the Symmetrix VMAX array.
VMAX Engine Configuration with Storage Bays:
Now let's have a look at how the VMAX engines are configured along with the storage bays. Below is a pictorial representation of the configurations from one VMAX engine up to eight VMAX engines along with their storage bays. This is the standard EMC-recommended configuration layout.

1. One VMAX engine with storage bays:

The Symmetrix VMAX array requires at least one VMAX Engine in the System Bay. As shown, the first engine in the System Bay will always be Engine 4, counted starting at 1 from the bottom of the System Bay. In this example, Engine 4 has two half-populated Storage Bays. One bay is directly attached and the second is a daisy-chain-attached Storage Bay. This allows for a total of 240 drives. To populate the upper half of these Storage Bays with drives, you will need to add another VMAX Engine.
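
To see where the 240 comes from: a half-populated Storage Bay holds 8 DEs × 15 drives = 120 drives, so two half-populated bays give 2 × 120 = 240 drives.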

2. Two VMAX engines with storage bays:

In this example, the system has been expanded to include Engine 5. This allows the top half of both Storage Bays to be populated with drives. This represents the correct order for adding VMAX Engines to the System Bay. VMAX Engines are added from the middle, starting with 4, then 5, then 3.

3. Three VMAX engines with storage bays:

Again, working from the middle out, the system has been expanded. The next VMAX Engine is 3, allowing the attachment of two additional Storage Bays. This allows for a total of 720 drives.

4. Four VMAX engines with storage bays:

5. Five VMAX engines with storage bays:

6. Six VMAX engines with storage bays:

7. Seven VMAX engines with storage bays:

8. Eight VMAX engines with storage bays: (fully populated)

Now that we have the general idea, let's take a look at how a system gets fully populated. Still working from the inside out, alternating above and below Engine 4, each engine is added until the System Bay is fully populated with eight VMAX Engines. As more engines are added, the corresponding Storage Bays are added. In this example, the color coding indicates the relationship between the engines and their associated Storage Bays. Fully populated, this configuration allows for a total of 2,400 drives. You will notice that Engines 1, 2, 7, and 8 each manage two daisy-chain-attached Storage Bays. This represents a supported system implementation, not a design limitation.

Also read: the latest hardware specification sheet

Happy Learning!