Wednesday, June 25, 2014

GPUs on the OSG


For a while, we have heard of the need for GPUs on the Open Science Grid.  Luckily, HCC hosts numerous GPU resources that we are willing to share, and with today's release of HTCondor 8.2, we wanted to make those GPU resources transparently available on the OSG.

Submission

Submission to the GPU resources at HCC uses the HTCondor-CE.  The submission file is shown below (gist):

You may notice the Request_GPUs command, which specifies the number of GPUs the job requires.  This is the same command used when running with native (non-grid) HTCondor.  You may submit with the line Request_GPUs = X (up to 3) to our HTCondor-CE node, since each GPU node has exactly 3 GPUs.
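
Since the gist embed is not reproduced here, the snippet below is only a minimal sketch of such a submit file.  The CE hostname and executable are placeholders, not the actual values from the gist.

    # Sketch of a grid-universe submit file for the HCC GPU resources.
    # Hostname and executable are placeholders.
    universe = grid
    grid_resource = condor ce.example.unl.edu ce.example.unl.edu:9619
    use_x509userproxy = true

    executable = gpu_test.sh
    output = gpu_test.out
    error  = gpu_test.err
    log    = gpu_test.log

    # Ask for one GPU; each HCC GPU node has exactly 3.
    request_GPUs = 1

    should_transfer_files = YES
    when_to_transfer_output = ON_EXIT

    queue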

Additionally, the OSG Glidein Factories have a new entry point, CMS_T3_US_Omaha_tusker_gpu, which is available for VO Frontends to submit GPU jobs.  Email the glidein factory operators to enable GPU resources for your VO.

Job Structure

The CUDA libraries are loaded automatically into the job's environment.  Specifically, we are running CUDA libraries version 6.0.

We tested submitting a binary compiled against CUDA 5.0 to Tusker.  It required a wrapper script to configure the environment and to transfer the CUDA libraries with the job.  Details and example files are in the gist.
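
The gist itself is not shown here, but a minimal sketch of such a wrapper might look like the following; the library directory and binary names are assumptions, not the actual files from the gist.

    #!/bin/bash
    # Wrapper for a binary built against CUDA 5.0.
    # Assumes the CUDA 5.0 shared libraries were shipped with the job
    # (via transfer_input_files) into a cuda-5.0-libs/ directory.
    export LD_LIBRARY_PATH=$PWD/cuda-5.0-libs:$LD_LIBRARY_PATH

    # Run the actual CUDA binary, passing through any job arguments.
    ./my_cuda_app "$@"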

Lessons Learned

Things to note:
  • If a job matches more than one HTCondor-CE route, the router will round-robin between the matching routes.  Therefore, it is necessary to modify all routes if you wish specific jobs to go to a specific route (see the sketch after this list).
  • Grid jobs do not source /etc/profile.d/ on the worker node.  I had to manually source those files in the pbs_local_submit_attributes.sh file in order to use the module command and load the CUDA environment.
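
To illustrate the first point, here is a rough sketch (not our production configuration) of two Job Router entries whose Requirements keep GPU and non-GPU jobs on separate routes; the route names are made up, and the surrounding route defaults are omitted.

    # Sketch only: GPU jobs (RequestGPUs > 0) match the first route,
    # and every other route must explicitly exclude them.
    JOB_ROUTER_ENTRIES = \
       [ name = "Tusker_GPU"; \
         GridResource = "batch pbs"; \
         TargetUniverse = 9; \
         Requirements = (TARGET.RequestGPUs isnt undefined) && (TARGET.RequestGPUs > 0); \
       ] \
       [ name = "Tusker_Default"; \
         GridResource = "batch pbs"; \
         TargetUniverse = 9; \
         Requirements = (TARGET.RequestGPUs is undefined) || (TARGET.RequestGPUs == 0); \
       ]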

Resources

  • HTCondor-CE Job Router config in order to route GPU jobs appropriately.
  • HTCondor-CE PBS local submit file attributes file that includes the source and module commands (a rough sketch follows below).
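
For a rough idea of what that attributes file can contain, here is a hedged sketch; the CUDA module name is an assumption, and the exact way the blahp consumes this file is glossed over.

    #!/bin/sh
    # Sketch of pbs_local_submit_attributes.sh (not the actual HCC file).
    # Grid jobs do not source /etc/profile.d on the worker node, so pull
    # those files in by hand to make the `module` command available.
    for f in /etc/profile.d/*.sh; do
        [ -r "$f" ] && . "$f"
    done

    # Load the CUDA environment for the job (module name is an assumption).
    module load cuda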

Wednesday, April 9, 2014

Part 1: Features of the Chrome App - Embedding


In my previous post, I introduced my OSG Chrome App.  In this post, I want to introduce one of its features: embedding profiles.

Part of my morning routine is checking multiple websites for the status of the resources at Nebraska.  For example, I check our dashboard page that I've written about before.  And I check the HCC GlideinWMS status page.

The HCC GlideinWMS status page shows only the monitoring information.  In order to see any accounting data, I have to navigate to the GratiaWeb site and filter for HCC usage.  Instead, I want to view the accounting data on the same page where I view the GlideinWMS status.  So I added the ability to embed a set of graphs using the Chrome App.

Creating an HCC GlideinWMS Profile

First, I need to create a GlideinWMS profile for HCC.  I have two questions that I want answered: who is running, and where are they running?  In this case, I am playing the role of a VO Manager.  I want to see usage filtered by VO, specifically my VO, HCC.  The VO Manager role gives me two graphs by default: total usage per VO, and where the VO is running.  But it doesn't show which users are running in my VO.  Therefore, I want to add a new graph, the Glidein per user graph (since HCC uses GlideinWMS), which is available on GratiaWeb.

To add a graph, I followed the documentation to add the Glidein per user graph.  I copied and pasted the graph URL into the box, and it added the graph to my list.
HCC Usage with new graph.  Not much usage...

Embedding the Profile

Now that I have the profile showing the data I want to see, I want to embed it in my webpage.  I again followed the documentation, this time for sharing the profile and embedding it.

I clicked the green share button at the top, and entered the name and description of the profile.  Then I submitted the profile for sharing.  It returned an embed link that I can use on the webpage.

Share profile showing embed URL

Once I have this embed URL, I can copy and paste that HTML into the HCC GlideinWMS page, and it will show HCC data every time I visit.

HCC Usage embedded on our own website

And that's it.



Friday, April 4, 2014

OSG Usage Chrome App

Hi, I'm Derek Weitzel.  You may remember me from such side projects as:
On today's episode, I will introduce a Chrome Application designed for the OSG's accounting system.

OSG Usage Chrome Application


The OSG Usage Viewer is a packaged Chrome app designed to display accounting graphs from Gratia, the OSG's accounting system.  The app allows you to add graphs from the OSG's Gratia web interface, to manipulate the data filters, and to share profiles with others.  Full documentation of the app is available.  Download the app today!




Wednesday, March 12, 2014

An HCC Dashboard with OSG Accounting

After the 2013 SuperComputing Conference, we found ourselves with an extra monitor at HCC.  Therefore, I set about creating a dashboard that shows the current status of HCC.

Creating the Dashboard

I have an interest in data visualization, and follow many blogs that show off new methods.  On one occasion, I saw Dashing mentioned.

Dashing is a dashboard framework made by Shopify for their own use and released as open source.  It is mostly written in Ruby and CoffeeScript (a higher-level JavaScript, if you can imagine).  It has a concept of jobs, which fetch data and forward it to the framework.  The data is sent to clients viewing the dashboard, where it is parsed by the CoffeeScript and rendered with a combination of data bindings from batman.js, CSS with SCSS, and plain old HTML.

I wrote several jobs to retrieve data from numerous sources.  Most of the information is from HCC's local instance of OSG's Gratia accounting system.  The HCC Dashboard uses our Gratia system for:
  • How many CPU hours were consumed on our resources.
  • Current usage by User (http://hcc.unl.edu//gratia/)
The job to retrieve the top users also communicates with HCC's user database to retrieve college and department information.  The storage meters use an external probe on the clusters to periodically report the used storage space of our filesystems.
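
For a feel of what such a job looks like, here is a hedged sketch of a Dashing job that polls a data source and pushes a number to a widget; the URL, JSON field, and widget name are made up, not the actual HCC code.

    # jobs/running_cores.rb -- sketch of a Dashing job, not the HCC original.
    require 'net/http'
    require 'json'

    # SCHEDULER.every and send_event are provided by the Dashing framework.
    SCHEDULER.every '5m', first_in: 0 do
      # Hypothetical endpoint; the real jobs query HCC's local Gratia instance.
      uri = URI('http://gratia.example.edu/current_cores.json')
      data = JSON.parse(Net::HTTP.get(uri))

      # Push the value to the widget bound to data-id="running_cores".
      send_event('running_cores', current: data['cores'])
    end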

Each box is an instance of a widget.  A widget is a combination of HTML, SCSS, and CoffeeScript used to parse and present the data.

Current Dashboard Design

Most of the information on the dashboard is monitoring data.  The current number of cores used on our resources and the top users widgets use Gratia monitoring information.  The networking graph uses Ganglia.

We also include a "Hourly Price on Amazon EC2" widget.  This combines the computing, storage, and networking costs (extrapolated from current values), and displays an expected price per hour on Amazon.  The computing is easily the most expensive component.

Who Uses it?

HCC uses it to display the current status of our computing center.  It is useful for spotting when something is working incorrectly.  For example, we were able to spot problems on one of our clusters when the number of running cores decreased significantly, which turned out to be caused by the scheduler draining off a significant portion of the cluster so that a single user could run a toy job.

The top users widget is also interesting for HCC researchers when they come into the offices.  They are able to see their own usernames prominently displayed on the big screen.


Growing collection of visualizations

More Information

The source for the dashboard is available on Github.  Also, the live instance of the dashboard is available here.

Thursday, February 27, 2014

Moving from a Globus to an HTCondor Compute Element


A few weeks ago, we moved our opportunistic clusters, Crane and Tusker, from Globus GRAM gatekeepers to the new HTCondor-CE.  We moved to the HTCondor-CE in order to solve performance issues we experienced with GRAM when using the Slurm scheduler.

When we switched Tusker from PBS to Slurm, we knew that we would have issues with the grid software.  With PBS, Globus would use the scheduler event generator to efficiently watch for state changes in jobs, e.g. idle -> running, running -> completed.  But Globus does not have a scheduler event generator for Slurm, so it must query each job every few seconds to retrieve its current status.  This put a tremendous load on the scheduler, and on the machine.

Load graph on the gatekeeper
We switched to the HTCondor-CE in order to alleviate some of this querying load.  The HTCondor-CE provides configuration options to change how often it queries for job status, and can provide system-wide throttles for job status querying.

The HTCondor-CE also provides much better transparency to aid in administration.  For example, there is no single command in Globus to view the status of the jobs.  With the HTCondor-CE, there is: condor_ce_q.  This command will tell you exactly which jobs the CE is monitoring, and what it believes their status to be.  And if you want to know which jobs are currently transferring files, they will show the < or > symbol in their job state column for incoming or outgoing transfers, respectively.

The HTCondor-CE uses the same authentication and authorization methods as Globus.  You still need a certificate, and you still need to be part of a VO.  The job submission file looks a little different: instead of gt5 as your grid resource, it is condor:
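
The gist embed does not render here, but a minimal sketch of such a submit file is below; the CE hostname and executable are placeholders, not the values from the gist.

    # Sketch of a grid-universe submit file pointed at an HTCondor-CE.
    universe = grid
    # Old Globus GRAM resource line, for contrast:
    #   grid_resource = gt5 ce.example.unl.edu/jobmanager-pbs
    # New HTCondor-CE resource line:
    grid_resource = condor ce.example.unl.edu ce.example.unl.edu:9619
    use_x509userproxy = true

    executable = test.sh
    output = test.out
    error  = test.err
    log    = test.log

    should_transfer_files = YES
    when_to_transfer_output = ON_EXIT

    queue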

Improvements for the future

The HTCondor-CE could be improved.  For example, each real job has two entries in the condor_ce_q output.  This is due to the job router mapping the incoming job to a scheduler-specific job.  The condor_ce_q command could be improved to show the link between the two jobs, similar to the DAG output of the condor_q command.

The job submission file is removed after submission to the local batch system (Slurm), whether the submission succeeds or fails.  This can make debugging very difficult if the submission fails for any reason.  Further, the gatekeeper doesn't propagate the stdout / stderr of the submission command into the logs.

Final Thoughts

Our initial impressions of the HTCondor-CE have been very good.  Since installing the new CE, we have had ~100,000 production jobs run through the gatekeeper from many different users.

And now for the obligatory accounting graphs:

Usage of Tusker as reported by GlideinWMS probes.

Wall Hours by VO on Tusker since the transition to the HTCondor-CE