New and better methods of scientific experimentation are generating enormous volumes of data that require increasingly sophisticated methods to keep organized. Five years ago, data volumes were measured in terabytes.
Today, with advances in whole genomic sequencing, cell proteomics, and wearable devices, the volumes of data being generated and stored are in the tens to hundreds of petabytes. Access to these data is only possible by moving the computation to the data.
Gryphon understands this dynamic and has worked over the past decade to implement an appropriate computational infrastructure. Our infrastructure enables researchers to authenticate and access the more than 100,000 data objects we make available by object reference, through personal database snapshots or web-service requests. In this way, we deliver petabytes in minutes.
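To illustrate the object-reference pattern, here is a minimal sketch of how a researcher-facing client might compose one authenticated web-service request for a single data object. The endpoint URL, bearer-token scheme, and object-ID format are all illustrative assumptions, not Gryphon's actual API.

```python
# Hypothetical sketch of an object-reference lookup over a web service.
# BASE_URL, the token scheme, and the ID format are assumptions for
# illustration only; they do not describe Gryphon's real interface.
from urllib.parse import urljoin

BASE_URL = "https://data.example.org/api/v1/"  # placeholder endpoint


def object_request(object_id: str, token: str) -> tuple[str, dict]:
    """Build the URL and auth headers for one object-reference lookup."""
    url = urljoin(BASE_URL, f"objects/{object_id}")
    headers = {"Authorization": f"Bearer {token}"}  # bearer-token auth
    return url, headers


url, headers = object_request("OBJ-000123", "example-token")
print(url)  # https://data.example.org/api/v1/objects/OBJ-000123
```

The key point is that clients request a stable object reference rather than a file path, so the service can resolve the reference against whichever snapshot or storage tier currently holds the data.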
Furthermore, we’ve provided methods to retrieve results, enabling the National Institutes of Health (NIH) to make informed decisions about which data are most important. Our model infrastructure for computational science is extracting knowledge from the data now being generated by scientists around the world.