Seismic Library

From Madagascar
Revision as of 14:19, 9 February 2009 by Bert (talk | contribs)



Imagine a world where seismic data access is a done deal, thanks to a de-facto, free, easy-to-use library made by a group of enthusiasts. A truly open, usable and free Open Source initiative. No pile of format descriptions, no fat libraries intended for other purposes, just seismic data: read and write. Something that works from day one.


You can't always get what you want. But sometimes, you get what you need.

  • The fastest-trace-in-town? No chance anyway. But pretty fast is cool.
  • Super-slim super-compressed files? Nah. But deep down we know that size does matter.
  • The richest feature set? Hmmm. Self-describing, extensible is nice.

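To make "self-describing, extensible" concrete, here is a minimal sketch of what such a trace format could look like. Everything here is hypothetical (the layout, the function names, the `format` and `nsamples` keys are all assumptions, not an existing Madagascar API): a one-line JSON header carries arbitrary metadata plus the sample count, followed by the raw samples, so a reader needs no external format document.

```python
import io
import json
import struct

def write_trace(stream, samples, **metadata):
    # Header first: arbitrary user metadata plus the two keys a reader
    # needs to interpret the payload without any outside documentation.
    header = dict(metadata, nsamples=len(samples), format="float32le")
    stream.write((json.dumps(header) + "\n").encode("utf-8"))
    # Payload: samples as little-endian 32-bit floats.
    stream.write(struct.pack("<%df" % len(samples), *samples))

def read_trace(stream):
    # The header line tells us how many samples follow.
    header = json.loads(stream.readline().decode("utf-8"))
    raw = stream.read(4 * header["nsamples"])
    samples = list(struct.unpack("<%df" % header["nsamples"], raw))
    return header, samples

buf = io.BytesIO()
write_trace(buf, [0.0, 1.5, -2.0], dt=0.004)
buf.seek(0)
header, samples = read_trace(buf)
```

The extensibility lies in the header: a key like `dt` above is unknown to the library, yet it travels along and comes back out unchanged, so tools can add metadata without breaking older readers.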
Real aims

OK, but what aims are 'rock-hard'?

  1. As many people as possible benefit. From acquisition to interpretation, from university to contractor, from researcher to IT professional, from beginning student to the highly experienced software developer.
  2. Data once written will be readable. Always. And you know what you're getting. Always.
  3. The library is easy to use. As easy as possible. Not easier than that.


There are no hidden agendas. If you're in, you strive to help the aims set above. Employers, money, old habits? When in doubt, you go against those. Just to keep a clean conscience.


You can sit behind your screen and specify formats and interfaces for days, weeks, months, until you're blue in the face. Documents grow, and every model you make is more beautiful than the previous one. Then finally, your models are good enough that you can start programming. Oops. Within a few days you've proven that your models are wrong. You've learned the hard way what thousands of professionals ignore day in, day out: it doesn't work that way.

So, how?

Specifications in environments like ours develop during actual software development. What you can do is write one or a couple of documents scanning the problem domain, identifying which kinds of problems will likely need to be grappled with. Things like: what sorts of data and procedures are common in acquisition, processing and interpretation? How does this differ between research and hard-core production work? Or this one: which computer language(s) is/are the target(s)?

Therefore, we need input from many sides, from people who are willing to put aside their economic, emotional and historical stakes and instead approach the problem from the more philosophical side. The result is an inventory of knowledge, tensions, existing techniques, resources, visions, ... and we take it from there.

Then, start cracking. The Open Source way is to specify by implementing. A function that works is worth a thousand words of specification. Read about the Agile way, and pick up the good bits. Keep communicating, and keep improving. Prepare for change; in particular, make sure the data versioning system is brilliant. Contribution is great, but keep the evil forces away.
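One way to "prepare for change" is to stamp every file with a format version and have readers dispatch on it. The sketch below is a hypothetical illustration of that idea, not an existing interface: the version numbers, field names (`dt_ms`, `dt_s`, `unit`) and functions are invented for the example. The point is that a reader for a published version is never removed; new versions only add new readers, so old data stays readable forever.

```python
def _read_v1(header):
    # Version 1 stored the sample interval in milliseconds.
    return {"dt_s": header["dt_ms"] / 1000.0}

def _read_v2(header):
    # Version 2 switched to seconds and added an optional coordinate unit.
    return {"dt_s": header["dt_s"], "unit": header.get("unit", "m")}

# Readers are only ever added to this table, never removed: every file
# version that was ever published remains readable.
_READERS = {1: _read_v1, 2: _read_v2}

def read_header(header):
    try:
        reader = _READERS[header["version"]]
    except KeyError:
        raise ValueError("unknown format version %r" % header.get("version"))
    return reader(header)
```

With this design, "data once written will be readable, always" becomes a mechanical guarantee rather than a promise: an old v1 file and a fresh v2 file pass through the same entry point and come out in the same normalized form.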