Friday, June 29, 2012

CMS with the Campus Factory

The Campus Factory is usually used by small research groups to expand their available resources to others on the campus.  Of course, that's not always easy for larger VOs, which tend to have more complicated software setups.  This is where the combination of Parrot and CernVM-FS comes in.

CernVM-FS is an HTTP-based file system that serves the software repositories of many CERN-based VOs.  In our case, we used a CernVM-FS server hosted at the University of Wisconsin - Madison (Docs).

Parrot is a program that captures reads and writes from arbitrary executables and redirects them to remote resources.  For our use, we redirect reads of the local file system to reads from the CernVM-FS server at UW.

Our T3, as usual, is oversubscribed.  Sending our T3 jobs out onto the grid, much like overflowing Tier-2 jobs, would significantly decrease the time to completion for our CMS users.  But our campus grid does not have CMS software available everywhere, so we must export the software to the jobs.  For this, we use Parrot and CernVM-FS.
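As a sketch of how this fits together, a job wrapper can run the payload under Parrot so that reads of /cvmfs are served over HTTP.  The repository name, URL, and the DRY_RUN switch below are illustrative placeholders, not our actual configuration:

```shell
# Illustrative sketch: run a payload under Parrot with CernVM-FS.
# The repository name and URL are made-up placeholders; substitute the
# real CernVM-FS repository and server for production use.
run_under_parrot() {
    # Tell Parrot which CernVM-FS repository to mount and where to fetch it.
    PARROT_CVMFS_REPO='cms.example.edu:url=http://cvmfs.example.edu/cvmfs/cms.example.edu'
    export PARROT_CVMFS_REPO

    if [ "${DRY_RUN:-0}" = "1" ]; then
        # For illustration/testing only: show the command instead of running it.
        echo "parrot_run $*"
    else
        # parrot_run ships with cctools; the payload sees /cvmfs as a
        # normal directory while Parrot fetches the data over HTTP.
        parrot_run "$@"
    fi
}
```

A job would then invoke something like `run_under_parrot cmsRun analysis_cfg.py` instead of running the payload directly.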

Pilot submission of BOSCO

The BOSCO system is depicted in the above graphic.  First, the user submits their jobs to their local Condor.  This instance of Condor could be tied to local resources that can also run the jobs, but for this picture, we show only the BOSCO resources.  The Factory periodically queries the user's Condor and submits pilot jobs to run the user's jobs.  Once the pilots start on the remote system, they begin executing the user's jobs.  The user does not have to specify any special requirements, nor use any special commands, for this system to work.
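Because nothing special is required of the user, the submit file is an ordinary vanilla-universe one.  A minimal sketch (the file names here are illustrative, not from our setup):

```
# Minimal Condor submit file -- nothing BOSCO- or pilot-specific required.
universe   = vanilla
executable = analysis.sh
output     = job.$(Cluster).out
error      = job.$(Cluster).err
log        = job.log
queue 1
```

The user runs condor_submit as usual; the Factory notices the idle job and submits pilots on its behalf.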

We used BOSCO to flock jobs from our T3 to our other campus resources.  This process required no user interaction.  As a matter of fact, the user had no idea that her jobs were not running on the T3.  This transparent experience for the user is the primary goal of the Campus Factory design, and it was clear in this experiment.
Tier-3 Connection to the UNL Campus Grid

We hope to make this a production service in the future.  In the meantime, this is being used as a prototype for what other Campuses can do with BOSCO.

Acknowledgments: Dan Bradley and the cctools team for the CernVM-FS integration with Parrot.  The AAA project for the file infrastructure to enable transparent data access.  And Helena Malbouisson for allowing me to play with her jobs, sending them to other resources.
Modifications to the Campus Factory configs can be found on GitHub.

Thursday, June 28, 2012

Day 4: Open Science Grid Summer School 2012

Yesterday, I taught the class on storage on the OSG.  Since I was teaching, I was unable to write or take any pictures.

Today focuses on actual science on the OSG.  First up: how real science runs on the OSG, with Greg Thain.
Greg teaching rules of thumb on the OSG

This afternoon was focused on success stories of running on the OSG.  For example, Edgar Spalding talked about his botany workflow, which ran successfully not only at Wisconsin but on the OSG as well.

Edgar talking about plant genetics
Today is the last day of the summer school, and we are all feeling very exhausted.  It was a very successful summer school.  Many students learned how their science can be done with HTC.  I am starting to see students think in terms of HTC, such as how they can split their jobs into manageable sizes, what input data would be required?  What output?

Tonight is the final dinner, and then we are done.  I will be driving back to Fermilab, then I will be back in Lincoln next week.

Again, pictures from the summer school can be found here.

Tuesday, June 26, 2012

Video of Igor's Exercise

Igor had a very interesting exercise during the OSG Summer School.

It's a little difficult to explain, but:
Each student at the tables is a worker node.  The people walking around are the 'network', carrying jobs to the scheduler, which is near the podium.  The students in the stands are 'users'.

The worker nodes can report wrong results, and also mis-represent themselves.  This is a security exercise.

Day 2 of OSG Summer School

We had a great day yesterday at the OSG Summer School.  Not only was the weather great, but the exercises went very well (a testament to Alain).

Monday Evening Work Session
The evening work session was great as well.  We were able to debug some problems that we didn't have time to work on during the day.  Also, we were able to answer questions about the OSG in a much more informal setting.

Today is Igor's day, dealing with glideins.  So far, the exercises have worked very well.  BLAST has been an excellent example for the users.

Igor Presenting Tuesday Morning
Students exercises Tuesday morning

On a side note, Parrot is finally working with BLAST on glideins.  So the Remote I/O talk tomorrow is a GO.

Again, pictures are on a public album on my Google Plus.

Monday, June 25, 2012

Summer School Pictures

I'm putting pictures from the summer school on my Google Plus.

Day 1 of OSG Summer School

I was asked to teach at the OSG Summer School.  I think teaching the next generation of OSG users is a great opportunity.  This is how I learned about the OSG, at the International Summer School for Grid Computing (ISSGC).
Anwar and I learning grid computing in Nice, France

This week we are at the OSG Summer School in warm Madison (not quite the same as Nice).

Alain working the room
Students hard at work on Alain's Exercises
More updates as they happen.

Saturday, June 2, 2012

Installing and Configuring glusterfs on EL6

I'm always interested in the newest technologies.  With the purchase of Gluster by Red Hat, I figured it was time to give it a try.  We are always looking for new technology that can lower the operational burden on our sysadmins; maybe Gluster is that option.

This guide is heavily based on the administrator's guide for Gluster.


All of the gluster packages are in EPEL, so first we need to install that repo on our nodes.
$ rpm -Uvh

Then install the glusterfs server:
$ yum install glusterfs-server -y

Then start the server:
$ /etc/init.d/glusterd start

For demo purposes only, flush the firewall:
$ iptables -F


And now add the nodes to the gluster system:
$ gluster peer probe i-0000011a
Probe successful
$ gluster peer probe i-0000011c
Probe successful

Now you can check for the nodes with the status command:

$ gluster peer status
Number of Peers: 2

Hostname: i-0000011a
Uuid: 5bdc4f02-4e08-4794-af03-fd624be2d2e0
State: Peer in Cluster (Connected)

Hostname: i-0000011c
Uuid: 248be1ba-c5aa-40d1-90e9-ca95a7e31697
State: Peer in Cluster (Connected)

In this demo, I decided to make a Distributed Replicated volume.  There are many options, but this seemed the best fit that I could see.

To create the volume:
$ gluster volume create test-volume replica 3 transport tcp i-00000119:/exp1 i-0000011a:/exp2 i-0000011c:/exp3

Note: I didn't make the /expX directories on any of the nodes; they are created automatically for you.

To start the volume:
$ gluster volume start test-volume

To mount the volume, we don't have to modprobe fuse, since it's built into the 2.6.32 kernel that comes with EL6.  You can also mount Gluster volumes over NFS, but I decided to use FUSE.
$ mkdir -p /mnt/glusterfs
$ mount -t glusterfs i-0000011a:/test-volume /mnt/glusterfs
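To make the mount survive a reboot, an /etc/fstab entry can be added.  A sketch, reusing the hostname and mount point from above (the _netdev option defers mounting until the network is up):

```
i-0000011a:/test-volume  /mnt/glusterfs  glusterfs  defaults,_netdev  0 0
```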

YAY! Working GlusterFS.  To confirm that it was working, I copied in a test file, mounted test-volume on another node in the test cluster as well, and there was my file!


GlusterFS doesn't seem too advanced compared to Hadoop or Ceph.  If I look in the /expX directories, I just see the whole file in there.  In the current release, I believe the closest volume configuration we could have to Hadoop or Ceph is Striped Replicated Volumes.  But that volume type is only supported for use as a MapReduce back end.
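For reference, creating a Striped Replicated volume looks much like the earlier create command; a sketch with a hypothetical fourth node, since stripe count times replica count must equal the brick count (four here):

```
$ gluster volume create striped-volume stripe 2 replica 2 transport tcp \
    i-00000119:/exp1 i-0000011a:/exp2 i-0000011c:/exp3 i-0000011d:/exp4
```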

I think GlusterFS would be really cool as an OpenStack back end, especially since it's so darn simple.  It is easily recoverable, since the files are stored as plain files on the bricks.  Of course, you would probably want to do striping for the large image files.

Overall, I feel this was the easiest of the file systems I have tried out.  Ceph was a little scary with all the configuration needed.  GlusterFS was as simple as issuing a command to add another server.  Of course, does this mean it'll rebalance the files if a server goes away?  I don't really know how that will work.