Workload Patterns

A useful precursor

The Enterprise File Fabric does not provide storage directly; instead, it provides the ability to connect various on-premises and on-cloud storage solutions, unifying them into a 'single pane of glass' for access.

The storage solutions can include, but are not limited to, CIFS/SMB, Windows Filers, NAS/SAN, Object Storage, Amazon S3, Azure, Google Storage, Dropbox, Office 365, OneDrive, etc.

When the File Fabric connects to these solutions it does not replicate files; it indexes the remote file store and stores the metadata of each item, such as filename, size, path and date.

When end users connect to the File Fabric, file listing and navigation are quick because the listing does not need to be pulled from the back-end data store. This can have tremendous advantages when the remote listing is very large (we have seen billions of files / objects) and when latency is very high. It can also be used to provide high-speed unified search for files across storage repositories.
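
As a rough illustration of this model, the sketch below keeps file metadata in a small local index and answers folder listings and cross-repository searches from that index rather than from the back-end store. The table and field names (file_meta, name, path, size, mtime, provider) are hypothetical examples and do not reflect the File Fabric's internal schema.

```python
# Illustrative sketch only: a minimal metadata index of the kind described
# above, using SQLite. Field names are hypothetical, not the File Fabric's
# actual schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE file_meta (
        name     TEXT,
        path     TEXT,
        size     INTEGER,
        mtime    TEXT,
        provider TEXT      -- e.g. 's3', 'smb', 'azure'
    )
""")

# Metadata harvested from the back-end stores during indexing.
db.executemany(
    "INSERT INTO file_meta VALUES (?, ?, ?, ?, ?)",
    [
        ("report.pdf", "/finance/2023/report.pdf", 1048576, "2023-06-01", "smb"),
        ("scan-001.tif", "/genomics/run42/scan-001.tif", 52428800, "2023-06-02", "s3"),
    ],
)

# Folder listings and cross-provider search are answered from the index,
# so no round trip to the (possibly high-latency) back end is needed.
listing = db.execute(
    "SELECT name, size FROM file_meta WHERE path LIKE '/finance/%'"
).fetchall()
matches = db.execute(
    "SELECT path, provider FROM file_meta WHERE name LIKE '%scan%'"
).fetchall()
print(listing, matches)
```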

The File Fabric is non-proprietary and therefore bi-modal, which means data can be added directly to the storage solution; it does not need to be processed through the File Fabric. To keep the stored metadata up to date, the File Fabric can continually re-index the storage at intervals, or it can be set up to asynchronously pull new items, on demand, for the view the user is currently in.
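
The two refresh strategies can be pictured as follows. This is a conceptual sketch only: provider.list_all(), provider.list_folder() and index.upsert() are hypothetical helpers used for illustration, not File Fabric APIs.

```python
# Conceptual sketch of the two refresh strategies described above.
# All helper objects and method names are hypothetical.
import threading
import time

def full_reindex(provider, index):
    """Walk the whole back-end store and refresh the metadata index."""
    for item in provider.list_all():          # full walk of the remote store
        index.upsert(item)

def schedule_full_reindex(provider, index, interval_seconds=3600):
    """Run the full re-index at a fixed interval (e.g. hourly)."""
    def loop():
        while True:
            full_reindex(provider, index)
            time.sleep(interval_seconds)
    threading.Thread(target=loop, daemon=True).start()

def refresh_current_view(provider, index, folder):
    """On demand: asynchronously pull only the folder the user is viewing."""
    def worker():
        for item in provider.list_folder(folder):
            index.upsert(item)
    threading.Thread(target=worker, daemon=True).start()
```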

The File Fabric can also perform deeper metadata indexing, in which the content of a file is indexed to provide much richer metadata; this is what we use for our content discovery (PII/PHI) features.
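
As a simplified illustration of content-level indexing, the example below scans a file's text against a couple of example patterns and records the result as extra metadata. The patterns and field names are illustrative only and are not the File Fabric's actual classification rules.

```python
# Simplified illustration of content indexing for PII discovery.
# Patterns and fields are examples, not the product's classification rules.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def content_index(path, text):
    """Return richer metadata for a file: detected PII types plus a token count."""
    found = [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]
    return {"path": path, "pii_types": found, "tokens": len(text.split())}

print(content_index("/hr/offer_letter.txt",
                    "Contact jane.doe@example.com, SSN 123-45-6789"))
```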

Workload Planning

In general, many customers use a single File Fabric deployed in an HA fashion, which may be geo-deployed and/or use the File Fabric's SiteLink feature.

We do have some customers in the governmental, medical / genomics, and research verticals that work with extremely large data sets / data lakes and have many different use cases.

The way in which the File Fabric caches this metadata in these scenarios can be very advantageous, because access and search by any other means can be very slow or, in some extreme cases when Object Storage is used and prefixes are shallow, almost impossible due to timeouts.
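
The sketch below illustrates why this matters for object storage: enumerating a shallow prefix over the S3 API is paginated at 1,000 keys per call, so counting or listing billions of objects takes a very large number of round trips, whereas the same question can be answered with a single query against the local metadata index (reusing the hypothetical file_meta table from the earlier sketch). A bucket name and boto3 credentials are assumed to be available.

```python
# Illustrative comparison: walking a shallow prefix via the S3 API versus
# querying a local metadata index. The index schema is the hypothetical
# file_meta table from the earlier sketch.
import boto3

def count_objects_via_api(bucket, prefix=""):
    """Walk the bucket page by page (at most 1,000 keys per page)."""
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    total = 0
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        total += page.get("KeyCount", 0)
    return total

def count_objects_via_index(db, prefix=""):
    """Answer the same question from the local metadata index in one query."""
    row = db.execute(
        "SELECT COUNT(*) FROM file_meta WHERE path LIKE ?", (prefix + "%",)
    ).fetchone()
    return row[0]
```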

Quite often such cases are read-heavy, with files being added directly to the underlying stores and users requiring direct desktop access through the File Fabric's desktop drives or sync.

In parallel, it is not unusual for such companies to also have document repositories whose access characteristics are different, often very write-heavy. In these circumstances we recommend that File Fabric system administrators and architects consider tenant partitioning to segment / partition the metadata, or consider deploying a separate File Fabric node.
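
One way to picture metadata partitioning is shown below: each tenant's metadata lives in its own index, so a write-heavy document repository does not contend with a read-heavy data-lake tenant. The class, table and tenant names are illustrative assumptions, not File Fabric configuration.

```python
# Conceptual sketch of partitioning metadata by tenant. In practice each
# partition could be a separate database, schema, or File Fabric node.
import sqlite3

class TenantPartitionedIndex:
    def __init__(self):
        self._indexes = {}

    def _index_for(self, tenant):
        # Lazily create one metadata store per tenant.
        if tenant not in self._indexes:
            db = sqlite3.connect(":memory:")
            db.execute("CREATE TABLE file_meta (name TEXT, path TEXT, size INTEGER)")
            self._indexes[tenant] = db
        return self._indexes[tenant]

    def upsert(self, tenant, name, path, size):
        self._index_for(tenant).execute(
            "INSERT INTO file_meta VALUES (?, ?, ?)", (name, path, size)
        )

    def listing(self, tenant, prefix):
        return self._index_for(tenant).execute(
            "SELECT name, size FROM file_meta WHERE path LIKE ?", (prefix + "%",)
        ).fetchall()

idx = TenantPartitionedIndex()
idx.upsert("research", "genome.bam", "/lake/run1/genome.bam", 10**9)
idx.upsert("docs", "policy.docx", "/repo/policy.docx", 20480)
print(idx.listing("research", "/lake/"))
```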

The Storage Made Easy support and/or Professional Services Team is always on hand to help.