From Chameleon, with love

We love our users! That’s why for the last few months we have been working on a Valentine. Not the pretty picture – rather, we’d like to lay at your feet a bouquet of new capabilities that we hope will make your research easier and more productive. These include:

– Support for custom kernel boot. You can now easily customize the operating system kernel or modify the kernel command line in your experiments! While it was possible (just barely) to boot from a custom kernel before, it was inefficient and slow for those of you who needed to do it repeatedly: each time, a user had to upload a new image with the customized kernel to the repository and then boot from that image, both time-consuming operations. You now have the option of modifying the boot loader configuration to point to a new kernel on the local disk, or specifying kernel parameters, and then rebooting with this modified configuration. Updates to the kernel will take effect quickly, and experiments requiring kernel development will be much easier to run. If you are not working with kernels, you will see no changes in how images are deployed; however, we have had to change the image format of bare-metal images (from partition images to whole-disk images) as well as the image snapshotting instructions. To minimize the impact on you, we have converted all of the images supported by Chameleon and as many of the user images as we could; in order to convert others we may need your assistance on Monday. Note that your existing images have been renamed with a “.partition” suffix. If you would like to find out more about this feature, please read our bare metal user guide. We would like to thank our users Swann Perarnau of Argonne National Laboratory and Yuyu Zhou of the University of Pittsburgh for helping us define the details of this capability and working with us to test and validate it!
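For those unfamiliar with the mechanics, a boot loader change of this kind looks roughly like the following sketch for GRUB2. This is illustrative only, not Chameleon-specific instructions: the entry name, kernel paths, and command-line parameters are made up, and the exact config-regeneration command varies by distribution (grub2-mkconfig, grub-mkconfig, or update-grub).

```
# /etc/grub.d/40_custom -- illustrative entry for a locally built kernel
menuentry 'My custom experimental kernel' {
    set root='hd0,msdos1'
    # kernel image plus any kernel command-line parameters you want to test
    linux  /boot/vmlinuz-custom root=/dev/sda1 ro console=ttyS0 loglevel=7
    initrd /boot/initramfs-custom.img
}

# Then regenerate the boot loader configuration and reboot, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg && reboot
```

Because the kernel lives on the local disk, iterating on it only costs a rebuild and a reboot, rather than an image upload and redeploy.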

– Upgrade to Liberty. The OpenStack deployment that forms a core component of CHI (Chameleon Infrastructure) has been upgraded to the most recent Liberty release. This upgrade not only facilitated the development of the feature above, but also added multiple usability features, such as the ability to edit compute image metadata from the project dashboard and the ability to deactivate an image; improved error messaging; improved the performance of many features; and added administrative features that will allow us to operate Chameleon more smoothly.

– Appliance Marketplace. The appliance marketplace allows you to discover, publish, and share appliances – bare metal or virtual machine images capturing the experimental environment – a key element of reproducibility. The initial version of the Appliance Marketplace announced in our New Year’s message was simply a static table of initial appliances provided and supported by the Chameleon team. The new Appliance Marketplace allows you to publish and share your own appliances so that others can discover them and easily play with the software and tools you developed. Others can then use them either to reproduce your research or as a starting point for their own research and experimentation. To find out more, read the appliance documentation in our FAQ. We would like to thank Rosa M. Badia and her team at the Barcelona Supercomputing Center for motivating us to provide this capability sooner rather than later and for contributing the first external appliance representing COMPSs, a task-based programming model for distributed platforms!

– Identity-based federation with GENI. For some time now, GENI users have been able to access Chameleon by using their GENI credentials and charging to the GENI Federation Project. Now Chameleon users will be able to use their Chameleon credentials to access GENI without having to create a GENI account! This will enable multiple additional advanced research projects. We would like to thank Tom Mitchell and Marshall Brinn of BBN on the GENI team for working with us to make this possible!

As most of you already know, these upgrades have already been applied at the University of Chicago site, and will be applied at TACC during the scheduled downtime next week.

We hope that these new features will reduce the time you spend wrangling logistics and put it back where it belongs: into your research. Thank you to all for sharing your requirements with us and inspiring us to make Chameleon a better testbed!

If you find Chameleon useful for your research, send us a Valentine! Here’s what we would like: a story of how you are using Chameleon for your research. Those stories motivate not only us, by pushing us to build a better testbed, but also others, who may get good ideas from your work on how to structure their own experiments. To encourage this sharing of experiences, we plan to highlight interesting research projects done by our users on the Chameleon portal, as we recently did with the story on cybersecurity research. In short, we’d like to challenge you to be an inspiration to others: if you have interesting stories to share, email them to us!

Happy Valentine’s Day!

Kate Keahey
Mathematics and CS Division, Argonne National Laboratory
Computation Institute, University of Chicago

US Ignite Tutorial on CloudLab.US

I’m reviewing the US Ignite Tutorial on CloudLab.US, featuring an OpenStack Juno on Ubuntu 14.10 instance with a controller, network manager, and one compute node. This profile runs on either x86 or ARM64 nodes. It takes advantage of Vanilla Apache Hadoop, Hortonworks Data Platform, Apache Spark, etc.

CloudLab is a leading-edge laboratory for exploring and applying new cloud computing architectures at scale. The CloudLab infrastructure consists of three new clusters at U. Wisconsin, Clemson U., and U. Utah augmenting the existing Emulab and GENI (Global Environment for Network Innovations) distributed computing facilities. Each of the new clusters is aimed at providing hardware support for a different point in the cloud design space, and together they represent extraordinary flexibility for computer scientists to try new ideas and for domain scientists to match CloudLab infrastructure to their applications.

CloudLab is a project of the University of Utah, Clemson University, the University of Wisconsin Madison, the University of Massachusetts Amherst, Raytheon BBN Technologies, and US Ignite. CloudLab is part of the National Science Foundation’s NSFCloud program. To design and build the CloudLab facility, we’re partnering with three vendors: Cisco, Dell, and HP. Seagate has also provided a generous donation of hard drives.

Another similar project is Chameleon. Some videos here.

UPDATE: see also this additional recently announced project: Cornell to Lead NSF-Funded Cloud Federation for Big Data Analysis By David Raths 11/04/15

A: Because You Don’t ROWE Q: Why Can’t Tennessee Innovate? [Update]

Why Nashville Companies Are Targeting Tweens For High-Tech Jobs BY ALISSA WALKER | 07-09-2012

See here for news on ROWE in Nashville. Nicholas Holland demonstrates with his ROWE notes.

My older ROWE related posts here.


Mar 13, 2012

What good do personal clouds and corporate data hives, acqui-hires and crowdsourcing do to meet your needs (as HR continues to stumble around trying to hire long-term individuals for short-term projects, meanwhile preparing for the year-end mass layoffs which inexorably ensue) if your managers cannot get past their love affair with physical MBWA? Your employees are enculturated to do their best work in virtual innovation clusters and collaboratories (see article comments) which take place in a SecondLife CoLab or some such. What good does it do to build a city-wide innovation grid infrastructure or a country-wide innovation cyberspace if you still expect your employees to waste an hour of their day driving to and from a cube which holds a desktop computer when they have a speedier, more robust laptop at home? 1) Learn about Results Only Work Environments. 2) Invest in them. 3) Use them.

Analyzing and Improving Collaborative eScience with Social Networks (eSoN 12)

Workshop to be held with IEEE e-Science 2012

Monday, 8 October 2012, Chicago, IL, USA


Social networking is profoundly changing the way that people communicate and interact on a daily basis. As eScience is inherently collaborative, social networks can serve as a vital means for supporting information and resource sharing, aiding discovery of connected individuals, improving communication between globally dispersed individuals, and even measuring scientific impact. Consequently, eScience systems are increasingly integrating social networking concepts to improve collaboration. For example, researcher profiles and groups exist in publication networks, such as Google Scholar and Mendeley, and eScience infrastructures, such as MyExperiment, NanoHUB and GlobusOnline, all utilize social networking principles to enhance scientific collaboration. In addition to incorporating explicit social networks, eScience infrastructures can also leverage implicit social networks extracted from relationships expressed in collaborative activities (e.g. publication and grant authorship or citation networks).

This workshop aims to bring together researchers from a diverse range of areas to establish a new community focused on the application of social networking to analyze and improve scientific collaboration. There are two complementary areas of focus for this workshop: 1) how to efficiently share infrastructure and software resources, such as data and tools, through social networks, and 2) how to analyze and enhance collaboration in eScience through both implicit and explicit social networks, for example analyzing scientific impact through citation networks or improving collaboration by associating data and tools with networks of publications and researchers.
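As a toy illustration of the second focus area, an implicit co-authorship network can be extracted from publication records in a few lines of Python. The records and names below are invented for the example; in practice they would come from a bibliographic database or citation index.

```python
from itertools import combinations
from collections import Counter

# Toy publication records standing in for a real bibliographic dataset.
publications = [
    {"title": "Paper A", "authors": ["Ada", "Grace", "Alan"]},
    {"title": "Paper B", "authors": ["Ada", "Grace"]},
    {"title": "Paper C", "authors": ["Alan", "Edsger"]},
]

# Every pair of co-authors on a paper contributes one edge to the
# implicit social network; the edge weight counts how often the
# pair has collaborated.
edges = Counter()
for pub in publications:
    for a, b in combinations(sorted(pub["authors"]), 2):
        edges[(a, b)] += 1

print(edges[("Ada", "Grace")])  # prints 2: they co-authored two papers
```

The resulting weighted edge list can then feed standard network analyses, such as centrality measures as a proxy for scientific impact.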

This workshop represents the amalgamation of two complementary workshops held in 2011: Social Networks for CCGrids (SN4CCGrids) held at CCGrid 2011 and Measuring the Impact of eScience Research (MeSR) held at eScience 2011.

Scope of workshop

Topics of interest include, but are not limited to, the use of social networks to analyze and improve collaborative eScience:

  • The use of social networks and social networking concepts in eScience and eResearch
  • Social network applications used for eScience
  • Social network based resource sharing and collaboration architectures
  • New forms of collaborative computing and resource sharing
  • Crowdsourcing of scientific applications using social media
  • Social Cloud computing
  • Novel applications of digital relationships and trust
  • Definition of novel principles, models and methodologies for harnessing digital relationships
  • Extraction of implicit social networks from scientific activities (publication, citation and grants)
  • Analysis of collaborative scientific activity through social networks

Submission instructions

Authors are invited to submit papers containing unpublished, original work (not under review elsewhere) of up to 8 pages of double-column text using single-spaced 10-point type on 8.5 x 11 inch pages, as per IEEE 8.5 x 11 manuscript guidelines.

Templates are available from:

Authors should submit a PDF file that will print on a PostScript printer. Papers conforming to the above guidelines can be submitted through the workshop’s paper submission system:

At least one author of each accepted submission must attend the workshop, and all workshop participants must pay the eScience 2012 registration fee. All accepted papers will be published by the IEEE in the same volume as the main conference. All papers will be reviewed by an International Programme Committee (with a minimum of 3 reviews per paper). Paper submissions should be made via the EasyChair system by the date mentioned below.

Important dates

  • Paper Submissions Due: July 27, 2012
  • Notification of Acceptance: August 27, 2012
  • Camera Ready Versions Due: September 17, 2012
  • Workshop: October 8, 2012


Workshop Organisers

  • Kyle Chard, University of Chicago & Argonne National Laboratory, USA
  • Tanu Malik, University of Chicago & Argonne National Laboratory, USA
  • Simon Caton, Karlsruhe Institute of Technology, Karlsruhe, Germany
  • Wei Tan, IBM T.J. Watson Lab, USA

Steering Committee

  • Christine Borgman, University of California, Los Angeles, USA
  • Ian Foster, University of Chicago & Argonne National Lab, USA
  • Gerhard Klimeck, Purdue University, USA
  • Omer Rana, Cardiff University, UK

Programme Committee

  • Kris Bubendorfer, Victoria University of Wellington, New Zealand
  • Junwei Cao, Tsinghua University, China
  • Justin Cappos, Polytechnic Institute of New York, USA
  • Jinjun Chen, University of Technology Sydney, Australia
  • Walter Colombo, Cardiff University, UK
  • Mike Conlon, University of Florida, USA
  • Roberta Cuel, University of Trento, Italy
  • Roberto M Cesar jr, University of Sao Paulo, Brazil
  • Jennifer Golbeck, University of Maryland, USA
  • Peter Komisarczuk, Thames Valley University, UK
  • Nicolas Kourtellis, University of South Florida, USA
  • Paolo Missier, Newcastle University, UK
  • Ioan Raicu, Illinois Institute of Technology, USA
  • Jianwu Wang, San Diego Supercomputer Center, USA
  • Christof Weinhardt, Karlsruhe Institute of Technology, Germany
  • Wenjun Wu, Beihang University, China
  • Lynn Zetner, Purdue University, USA
  • Hui Zhang, Beijing University, China
  • Jia Zhang, Northern Illinois University, USA

Apparently cybersecurity is important to the US economy, Congress learns

H. R. 2096 To advance cybersecurity research, development, and technical standards,
and for other purposes.

Also, planning related to a national strategy concerning “HIGH PERFORMANCE COMPUTING” and “NETWORKING AND INFORMATION TECHNOLOGY”, and areas like IT and Big Data, is worth investing in at a national level.

H. R. 3834 To amend the High-Performance Computing Act of 1991 to authorize activities for support of networking and information technology research, and for other purposes.

Extreme Scaling Workshop 2012 Call For Presentations

Submission and Formatting Guidelines

Submissions for the Extreme Scaling Workshop 2012 are encouraged from scientists, engineers, and high-performance technologists from colleges, universities, laboratories, industry, HPC centers, and other organizations conducting related work. Each presentation will be 30 minutes in length, followed by group discussion.

The presentations and discussion are intended to assist the computational science and engineering community in making effective use of petascale through extreme-scale systems across the spectrum from local campus-scale to national systems.

Forward submissions, including title, abstract, and names and institutions of authors/presenters, to Scott Lathrop by April 15, 2012. Notices of acceptance will be issued by April 23.

The workshop committee seeks submissions of excellent quality addressing the following challenges:

  • Scaling applications to large-core counts on general-purpose CPU nodes
  • Effectively using accelerators and GPUs
  • Using both general-purpose CPU and accelerated GPU nodes in a single and coordinated simulation
  • Enhancing application flexibility for increased effective use of systems

Submissions may address one or more of the following components:

  • Application and Algorithm Functionality and Performance
  • Application and Algorithm Efficiency and Scaling to large-processor counts in the face of limited bandwidth (interconnect, memory, etc.) and other architectural constraints
  • Effective use of GPU Accelerator/Highly Parallel Heterogeneous Computational Units
  • Application and Algorithm Flexibility
  • Using Heterogeneous Systems that have both general-purpose CPU and accelerated GPU units in single applications
  • Application-based Fault Tolerant Methods and Algorithms to increase the effective use of resources
  • Application-based Topology Awareness to more effectively utilize limited resources

Accepted presentations and a summary of workshop discussions will be included in a workshop report. Abstracts should be no more than a page in length and provide sufficient detail for the workshop committee to make informed judgments about the work. Abstracts should include the names and affiliations of all co-authors and indicate which author will be presenting.

Key Dates
April 15: Paper or abstract submissions due.
April 23: Acceptances issued.
April 30: Workshop registration opens.
July 15-16: Workshop conducted.
July 16-19: XSEDE12 conference in downtown Chicago — Consider attending both events!

Registration, Travel and Accommodations
A registration fee of $125 covers the cost of food and beverages provided at the workshop: lunch and dinner on Sunday, breakfast and lunch on Monday, and breaks each day. NOTE: Registration will open in mid-April. A block of rooms is being reserved in a hotel near Chicago’s O’Hare airport, with free shuttle service between the airport and hotel. The hotel rate is expected to be approximately $120/night. Details will be announced within the next two weeks.

Participants are responsible for the cost of their own travel and accommodations and making their own travel arrangements.

For More Information
If you have questions, please contact Scott Lathrop at

We look forward to seeing you at the Extreme Scaling Workshop 2012 and hope that you come for the workshop and stay for the XSEDE12 conference, which runs from July 16-19.

XSEDE12 website:

Thomas Erl, 5th International SOA, Cloud + Service Technology Symposium

As the program committee chair, I am pleased to finally announce the 2012 edition of the Symposium, officially titled the “5th International SOA, Cloud + Service Technology Symposium”. This year’s conference is being held in downtown London on September 24-25.

With the change in the conference title comes a new official Website and a new official LinkedIn Group (the “International SOA, Cloud + Service Technology Symposium” LinkedIn Group). The Call for Presentations ends July 15, 2012.

I have removed my personal LinkedIn account from the old LinkedIn Group because all further communication and notifications regarding the next Symposium will be posted on the new LinkedIn Group.

If you have any questions regarding this year’s event or if you are interested in participating, feel free to contact me directly. You can also contact the event organizers with any general inquiries.

FOSE 2012 Washington, DC on April 3-5, 2012

The FOSE Conference & Expo is returning to Washington, DC, April 3-5 – with all new in-depth conferences, workshops, camps, keynotes and education.

The magic number for FOSE in 2012 is 5! The FOSE 2012 Education Experience features 5 all-new conferences.

FOSE also has a FREE Expo, including 5 keynotes and 5 Marketplaces with 5 educational theaters. There will also be free workshops and training, plus you can participate in 2 Camps – CloudCamp, and BigDataCamp!

Interested in attending FOSE? Register today at

Condos and Clouds

This is arguably one of the most useful presentations I’ve seen, and I can recommend it. Note: rated MA for language.

Condos and Clouds – A LinkedIn Tech Talk by Pat Helland, Distributed Systems and Databases Architect [this description from the LinkedIn event page]

Over the last 100+ years, the way people design, build, and use buildings has evolved. It is now normal to construct a building without knowing in advance who will occupy it. In addition, we increasingly have shared occupancy of our homes (apartments and condos), retail, and office space. To accomplish this change, the way we use buildings has evolved: there is a new trust relationship, along with customs and laws that establish the relationship between the occupants and the building managers. Recently, our industry has been moving to implement cloud computing. This has been very successful in some applications and very challenging in others. This talk posits that many of the challenges we’ve seen in cloud computing can be understood by looking at what has happened in buildings and their occupancy. Standardization, usage patterns, and the legal establishment of rights and responsibilities are all nascent in the area of cloud computing. We examine a very common pattern in the implementation of “software as a service” and propose ways in which this pattern may be better supported in a multi-tenant fashion.

Biography: Pat Helland has been working in distributed systems, transaction processing, databases, and similar areas since 1978. For most of the 1980s, he was the chief architect of Tandem Computers’ TMF (Transaction Monitoring Facility), which provided distributed transactions for the NonStop System. With the exception of a two-year stint at Amazon, Helland has worked at Microsoft Corporation since 1994, where he was the architect for Microsoft Transaction Server and SQL Service Broker. Until September 2011, he was working on Cosmos, a distributed computation and storage system that provides back-end support for Bing. Pat recently relocated to San Francisco with his wife to be close to the grandchildren and to explore new opportunities in “Big Data” and/or “Cloud Computing”.