Wednesday, October 14, 2015

The StashCache Tester

StashCache is a framework for distributing user data from an origin site to a job.  It uses a layer of caching, as well as the high-performance XRootD service, to distribute the data.  It can source data from multiple machines, such as the OSG Stash.
StashCache Architecture (credit: Brian Bockelman AHM2015 Talk)
In order to visualize the status of StashCache, I developed the StashCache-Tester.  The tester runs every night and collects data by submitting test jobs to multiple sites.
The website visualizes the data received from these tests.  It shows three visualizations:
  1. In the top left is a table of the average download speed across multiple test jobs for each site.  The cells are colored from green (best) to red (worst).  Also, if the test has not been able to run at a particular site for several days, the table shows the last successful test, but fades the color as that result gets older (a sketch of this fading rule follows the list).  An all-white background means that the test hasn't run for three or more days.
  2. In the top right is a bar graph comparing the average transfer rates for multiple sites.  This makes it easier to compare sites against each other at a glance.
  3. On the bottom, we have a historical graph showing the last month of recorded data.  You can see that some sites have large peaks in download speed.  Additionally, some sites are tested only infrequently, such as Nebraska (which is the CMS Tier-2).  Infrequent testing can be caused by an overloaded site that is unable to run the test jobs.
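As a rough Python sketch of the fading rule from item 1 (the thresholds and colors are my own guesses based on the description above, not the tester's actual code):

    # Fade a site's cell color toward white as its last successful test ages.
    # The three-day cutoff mirrors the "all white after three or more days" rule.
    def cell_color(days_since_last_success, base_rgb):
        if days_since_last_success >= 3:
            return (255, 255, 255)                    # no recent test: all white
        fade = days_since_last_success / 3.0          # 0 = fresh, approaching 1 = stale
        return tuple(int(c + (255 - c) * fade) for c in base_rgb)

    print(cell_color(0, (0, 170, 0)))   # today's result keeps the full green
    print(cell_color(2, (0, 170, 0)))   # a two-day-old result is noticeably washed out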
In the future, I want to add graphs comparing the performance of individual caches in addition to the existing site comparisons.  Further, I would like to add many more sites to be tested.

Friday, August 7, 2015

GPUs and adding new resources types to the HTCondor-CE

In the past, it has been difficult to add new resource types to the OSG CE, whether it was a Globus GRAM CE or an HTCondor-CE.  But it has just gotten a little bit easier.  Today I added HCC's GPU resources to the OSG with this new method.

With a (not yet approved) pull request, the HTCondor-CE is able to add new resource types by modifying only two files: the routes table and the scheduler attributes customization script.  Previously, it also required editing a third Python script with very tricky syntax (Python which spit out ClassAds...).  In the following examples, I will demonstrate how to use this new feature with GPUs.

The Routes

Each job submitted to an HTCondor-CE must follow a route from the original job to the final job submitted to the local batch system.  The HTCondor JobRouter is in charge of translating the original job into the final job, according to rules specified in the router configuration.  Crane's GPU route boils down to the following.
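A sketch of such a route, using the standard JOB_ROUTER_ENTRIES syntax (the name and exact attribute values here are illustrative; the real Crane entry may differ):

    JOB_ROUTER_ENTRIES @=jre
    [
      name = "HCC Crane GPU";
      TargetUniverse = 9;            /* route the job into the grid universe */
      GridResource = "batch pbs";    /* Slurm, driven through its PBS-compatible interface */
      set_default_queue = "grid_gpu";
      set_default_remote_cerequirements = "RequestGpus == 1";
    ]
    @jre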

The route submits the job to the grid_gpu partition of the local PBS (actually Slurm) scheduler.  Further, it adds a special new attribute:
default_remote_cerequirements = "RequestGpus == 1"
This attribute is used by the local submit attributes script, described in the next section.


Local Submit Attributes Script

The local submit attributes script translates the remote_cerequirements into the actual scheduler language used at the site.  For Crane's GPU configuration, the snippet added for GPUs boils down to the following.
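A sketch of that snippet (the script name and module-setup path are placeholders, and the real Crane script may differ); everything it echoes ends up in the submit script that the blahp generates:

    #!/bin/sh
    # Sketch of a blahp local submit attributes script (e.g. pbs_local_submit_attributes.sh).
    # RequestGpus arrives as an environment variable derived from remote_cerequirements.
    if [ -n "$RequestGpus" ]; then
        # Ask Slurm for a GPU for this job
        echo "#SBATCH --gres=gpu:1"
        # Set up modules and load CUDA inside the job (path is illustrative)
        echo "source /util/opt/modules/init/bash"
        echo "module load cuda"
    fi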

This snippet checks for the RequestGpus attribute in the environment and, if it is present, inserts several lines into the submit script.  It first adds the Slurm line requesting a GPU, then sources the module setup script and loads the cuda module.

Next Steps

The next step for using GPUs on the OSG is to use one of the many frontends capable of submitting glideins to the GPU resources at HCC.  Currently, the HCC, OSG, OSGConnect, and GLOW frontends are capable of submitting to the GPU resources.

Tuesday, July 28, 2015

The more things change, the more they stay the same

A lot has happened since I last posted in January.

  1. I have successfully defended and submitted my dissertation: Enabling Distributed Scientific Computing on the Campus.  I will formally graduate on August 15.
  2. I have been offered, and accepted, a position with the University of Nebraska - Lincoln Holland Computing Center.  I will be working with the Open Science Grid's software & investigations team.
  3. On a personal note, I am now engaged.
  4. And I am moving later this year to the HCC Omaha office.
Now that I have graduated, I hope to write more blog posts about what I am doing, as well as what is happening in the OSG teams that I am working with.

Sunday, January 25, 2015

HTCondor CacheD: Caching for HTC - Part 2

In the previous post, I discussed why we decided to make the HTCondor CacheD.  This time, we will discuss the operation and design of the CacheD, as well as show an example utilizing a BLAST database.

It is important to note that the CacheD is still very much "dissertation-ware."  It functions enough to demonstrate the improvements, but not enough to be put into production.

The Cache

The fundamental unit that the CacheD works with is an immutable set of files: a cache.  A user creates a cache and uploads files into it.  Once the upload is complete, the cache is committed and may not be altered afterward.  The cache has a set of metadata associated with it as well, stored as ClassAds in a durable storage database (using the same techniques as the SchedD job queue).

The cache has a 'lease', or an expiration date.  This lease is a given amount of time that the cache is guaranteed to be available from a particular CacheD.  When creating the cache, the user provides a requested cache lifetime.  The CacheD can either accept or reject the requested cache lifetime.  Once the cache's lifetime expires, it can be deleted by the CacheD and is no longer guaranteed to be available.  The user may request to extend the lifetime of a cache after it has already been committed, which the CacheD may or may not accept.
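As a hypothetical sketch of that accept-or-reject decision (the policy, names, and numbers here are mine, not the CacheD's actual logic):

    from datetime import datetime, timedelta

    MAX_LEASE = timedelta(days=7)        # an example local policy

    def accept_lease(requested_lifetime, free_disk_mb, cache_size_mb):
        """Return an expiration time if the requested lifetime is accepted,
        or None if the CacheD rejects the request."""
        if cache_size_mb > free_disk_mb or requested_lifetime > MAX_LEASE:
            return None                                   # reject: local policy violated
        return datetime.utcnow() + requested_lifetime     # accept: guaranteed until then

    print(accept_lease(timedelta(days=2), free_disk_mb=200000, cache_size_mb=15000))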

The cache also has properties similar to a job's.  For example, the cache can have its own set of requirements for which nodes it can be replicated to.  By default, a cache is initialized with the requirement that a CacheD has enough disk space to hold the cache.  Analogous to HTCondor's matching of jobs with machines, the cache can have requirements, and the CacheD can have requirements; a CacheD's requirements might be that the node has enough disk space to hold the matched cache.  This two-way matching guarantees that any local policies are enforced.

The requirements attribute is especially useful when the user aligns the cache's requirements with the jobs that require the data.  For example, if the user knows that their processing requires nodes with 8GB of RAM available, then there is no point in replicating the cache to a node with less than 8GB of RAM.
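To illustrate the two-way matching with a hypothetical example (using the classad module from the HTCondor Python bindings; the attribute names are made up for this sketch, not taken from the CacheD's schema):

    import classad

    # Ad describing the cache: it needs enough disk and, per the example above,
    # only wants to live on nodes with at least 8 GB of RAM.
    cache = classad.ClassAd()
    cache["DiskUsageMB"] = 15 * 1024
    cache["Requirements"] = classad.ExprTree(
        "TARGET.AvailableDiskMB >= MY.DiskUsageMB && TARGET.MemoryMB >= 8192")

    # Ad describing a CacheD: its policy is simply "the cache must fit on disk".
    cached = classad.ClassAd()
    cached["AvailableDiskMB"] = 200 * 1024
    cached["MemoryMB"] = 16 * 1024
    cached["Requirements"] = classad.ExprTree("TARGET.DiskUsageMB <= MY.AvailableDiskMB")

    # symmetricMatch checks each ad's Requirements against the other,
    # just like HTCondor's job/machine matchmaking.
    print(cache.symmetricMatch(cached))   # True: both sets of requirements are satisfied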

The CacheD

The CacheD is the daemon that manages caches on the local node.  Each CacheD is a peer of all other CacheDs; there is no central coordination daemon.  Each CacheD serves multiple functions:
  1. Respond to user requests to create, query, update, and delete caches.
  2. Send replication requests to CacheDs that match each cache's requirements.
  3. Respond to replication requests from other CacheDs.  Matching is done on the cache before transferring the data.
The CacheD keeps a database storing the metadata for each cache.  The database is stored using the same techniques the SchedD uses for jobs, providing a durable store.  The CacheD also maintains a directory containing all of the caches stored on the node.

The CacheD's user interface is primarily through Python bindings, at least for the time being.

CacheD Usage

The CacheD is used in conjunction with glideins.  The CacheD is started along with other glidein daemons such as the HTCondor StartD.
Initialization of cache as well as initial replication requests
The user initializes the cache by creating it and uploading it to the user's submit machine.  The submit machine's CacheD then connects to remote CacheDs, sending replication requests.

The BitTorrent communication between nodes after accepting the replication request
Once the CacheDs accept the replication request(s), the BitTorrent protocol allows for communication among all nodes inside the cluster, as well as with the user's submit machine.  This graph only shows a single cluster, but this could be replicated across many clusters as well.

In Action 

Partial graph showing data transfers.  Due to overflowing the event queue, not all downloads are captured.
The above graph shows the data transfer using the BitTorrent protocol between the nodes that have accepted the replication request and the Cache Origin, which is an external node.  In this example, only 5 remote CacheDs were started on the cluster.  Because of all of the traffic between nodes, a graph at this level of detail becomes unreadable very quickly as the number of remote CacheDs increases.

You will notice that the Cache Origin only transfers to 2 nodes inside the cluster.  The BitTorrent protocol is complicated and difficult to predict, so this could be caused by many factors.  For example, the two nodes could have found the cache origin first, therefore being the first nodes to download from it.  The other nodes would then have found the internal cluster nodes holding portions of the cache, and begun to download from them.

It is important to note that even though the ~15GB cache is transferred to all 5 nodes, totaling ~75GB of transferred data, only ~15GB is transferred from the cache origin; all of the rest of the transfers are between nodes in the cluster.

Up Next

In Part 3 of the series, I will look at timings of the transfers using data analysis of trial runs.  As a hint, the BitTorrent protocol is slower than direct transfers for one-to-one transfers.  But it really shines as the number of downloaders and seeders increases.

Thursday, January 22, 2015

Condor CacheD: Caching for HTC - Part 1

A typical job flow in the Open Science Grid is as follows:
  1. Stage input files to worker node.
  2. Start processing...
  3. Stage output files back to submit host.
In an ideal world, step 2 would take the longest.  But, as data sizes increase, so too does the time spent staging files in and out (primarily staging in).  Additionally, we continue to recommend the same maximum job length to users, around 8 hours.

Let's use a real-world example.  The nr BLAST database (Non-redundant GenBank CDS translations + PDB + SwissProt + PIR + PRF, excluding those in env_nr) (source) is currently ~15GB.  When running BLAST, each query needs to run over the entire nr database, so the database is required on every worker node that will run a query.

It is easy to say: well, a 1 Gbps connection can transfer 15GB in ~120 seconds.  Two minutes doesn't seem unreasonable for staging in data, especially if the job can be 8 hours long.  But usually you have many queries, so you will want to run them across many jobs.  So let's say you submit 10 jobs, each needing the 15GB.  Since they all share the submit host's link, that means 20 minutes of stage-in time before any processing begins.  And that is only for 10 jobs; what if you are submitting 1,000 jobs, or 10,000 jobs?  At 1,000 jobs, it takes 33 hours just to transfer the input files.  Suddenly 2 minutes to transfer a database becomes hours, and your submit machine is doing nothing but transferring input files!  Also, using some math (or a simple simulation), you can see that if the jobs start sequentially, you are limited in the number of jobs that can run simultaneously.
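Here is that back-of-the-envelope arithmetic as a small Python sketch (the numbers are the ones from the paragraph above):

    DB_SIZE_GB = 15      # size of the nr BLAST database
    LINK_GBPS = 1        # submit host's network link

    seconds_per_copy = DB_SIZE_GB * 8 / LINK_GBPS      # ~120 s for one full copy
    for njobs in (10, 1000, 10000):
        total_hours = njobs * seconds_per_copy / 3600  # the link is shared, so times add up
        print("%6d jobs: %7.1f hours of stage-in" % (njobs, total_hours))
    # 10 jobs -> ~0.3 h (20 minutes), 1,000 jobs -> ~33 h, 10,000 jobs -> ~333 h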

This increasing data size and static maximum job length have led to compromises and innovations on the part of users and sites.

Innovations

Some users and sites have attempted to solve the problem of larger input files.  The approaches can typically be broken into two categories:
  1. Bandwidth increases from the storage nodes.
  2. Caching near the execution host.

Lots O' Bandwidth

OSG Connect's Stash attempts to ease the input file stage-in problem by providing a storage service with lots of bandwidth (10 Gbps) and support for site caching through HTTP.  Certainly, higher bandwidth is the brute-force method of decreasing the transfer time for files.  But 10 Gbps only knocks off a factor of 10 from all of the above times.  This will certainly decrease the transfer time, but only if you can use all 10 Gbps.

Numerous sites and users have tried to use high-bandwidth storage services to solve the stage-in problem.  Nebraska (and many other sites) even has its storage services connected to 100 Gbps network connections.  But these services tend to be limited not by the bandwidth available to the storage device, but by the bandwidth bottleneck at the boundary between each cluster and the outside world, usually a NAT.

Caching

With Stash's HTTP interface, users can use local site caching.  When the site caching was designed, it was meant for calibration data for detectors.  This tends to be rather small data that is frequently accessed, the perfect use of an HTTP cache.  But what about when you want to transfer 15GB of files through the HTTP cache?  A typical site may only have a few cache servers, so the download bandwidth is limited not by the available bandwidth of the hosting server, but by the available bandwidth of the cache servers.  I don't know of many sites that are putting 10 Gbps connections on their caching servers.

Additionally, site caching simply runs from the submit host to the remote caching host.  In the last week, 95% of the OSG VO's CPU hours have been provided by ~25 unique sites (source, calculations).  This is analogous to increasing the bandwidth of the submit host by 25 times (assuming 1 Gbps connections are standard).  This is a very cheap way to increase the bandwidth available to transfer input files.  But the VO's usage is not split evenly amongst all 25 sites: 6 sites account for ~50% of the OSG VO usage.  On those 6 clusters, the transfer bandwidth is limited to what those 6 proxy servers can transfer to their cluster's nodes.  Therefore, for 50% of your processing slots, you are only increasing the transfer speed by a factor of 6.

The HTTP protocol and HTTP caching are not designed for such large files.  HTTP will always be designed for its dominant users: browsers downloading relatively small webpages.  Software designed to use HTTP is optimized for websites, with lots of small files.  Therefore, any software that might be used in conjunction with HTTP may not be ideal for large files.

A New Hope

Part of my PhD has been to develop a new way to handle large stage-in datasets.  In the above problem statement, two areas were most often used to optimize transfer times: bandwidth and caching.  I attempt to optimize both of these approaches using a daemon named the HTCondor CacheD.

Bandwidth

As noted above, HTTP is great for its designed use: websites with lots of small files.  But it is not designed or well optimized for larger files.  Further, the bandwidth from the execution host to the storage servers is typically bottlenecked at the cluster boundary.  Therefore another protocol was chosen that could better handle large files: BitTorrent.

BitTorrent was chosen because it has many characteristics that make it ideal for transferring large files.  In order to bypass the network bottlenecks in the remote clusters, BitTorrent allows clients to transfer data among themselves while also downloading from the original source.  This allows every worker node to become a cache for the rest of the cluster's nodes.  As we will see in the next post, BitTorrent works well because the vast majority of the traffic is between nodes inside the cluster, rather than to nodes outside the cluster.

Caching

The caching described above utilizes a single cache at each site.  From the usage breakdown, you can see that VOs that rely on a few sites will find this a bottleneck.  For campus users, who may only use one or two clusters, this can be as bad a bottleneck as not using a cache at all.  Also, this requires the site to have a caching server set up, which requires administrator cooperation.

The CacheD instead uses local caches on each worker node.  This local cache allows very fast transfers when the jobs begin.  Further, the local cache acts as a seeder for the BitTorrent transfers described above.

When a job begins running on a node, the stage-in step requests a copy of the data files from the local cache.  If the data files are not already staged on the local node, the local cache pulls them using BitTorrent from the submit host (the cache origin) and all other nodes in the cluster.  Once the local cache has the stage-in data files, it transfers the cached files into the job's sandbox and the job begins processing.  Subsequent jobs that require the same stage-in data files will request and immediately receive the files, since they are already cached locally.
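A hypothetical sketch of that per-job flow in Python (the function names and layout are placeholders, not the CacheD's actual bindings):

    import os
    import shutil

    def stage_in(cache_name, local_cache_dir, sandbox_dir, fetch):
        """fetch(cache_name, dest) stands in for the BitTorrent download the
        local CacheD performs from the cache origin and its cluster peers."""
        cache_path = os.path.join(local_cache_dir, cache_name)
        if not os.path.isdir(cache_path):
            # Cache miss: pull the immutable cache onto this worker node once.
            fetch(cache_name, cache_path)
        # Cache hit (or freshly fetched): copy the files into the job's sandbox;
        # later jobs on this node skip the wide-area transfer entirely.
        shutil.copytree(cache_path, os.path.join(sandbox_dir, cache_name))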

Up Next

In the next post, I will describe the design of the CacheD in more detail.  Further, I will show usage of the CacheD with BLAST databases, and the improvement in job startup time resulting from optimized stage-in transfers.

UPDATE: Link to Part 2, Architecture of the CacheD.