Much ado about scripting, Linux & Eclipse: card subject to change


Discount Fail

I'm having a problem w/ a purchase.

I added a Blackberry Bold compatible 2400mAh battery to my cart, on sale for 20% off ($24.07).

On page two, the price was still 20% discounted:

On page three of the cart, the price jumped back up to the full $30.09 price.

So... is this item on sale, or not?


Feeding The Right Sources To Your Builds

I was recently asked the best approach for how to control the input to one's builds, and while there is no single solution to this that fits all projects, here are two ways you might want to go.

Freeze Then Branch

The current approach for JBoss Tools development involves continuous nightly builds from trunk until such time as the project is deemed frozen in prep for an upcoming milestone or release candidate. At that time, the trunk is copied to branches/JBossTools-3.1.0.M4, and very little is done to that branch - only urgent tweaks and fixes. A new Hudson job is cloned from the current one, and adjusted to point at the branched sources instead of trunk. The maps are also updated to point at the new source location in SVN.

This allows more nightly builds towards an upcoming stable development milestone, while new development can continue in parallel in trunk. When the milestone is released, the unneeded job is disabled or removed.

The only issue with this approach is that all plugins built from the head of a branch (or trunk) are re-versioned w/ the latest timestamp every time they're compiled. Thus upgrading from one 80M update to the next requires another 80M download. To mitigate this, milestones are released only once every month or two.
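As a toy illustration of the re-versioning (the version and timestamp values here are made up), the qualifier replacement amounts to swapping the .qualifier suffix for the build's timestamp:

```shell
# toy illustration: each rebuild stamps a fresh qualifier, so a plugin's
# version changes even when its source did not (values are made up)
version="1.1.0.qualifier"
timestamp="v200911121430"
stamped=$(echo "$version" | sed "s/qualifier/$timestamp/")
echo "$stamped"
```

Since every plugin gets the new qualifier, p2 sees every plugin as changed, hence the full re-download.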

Tag And Release

The current approach for numerous projects, such as GEF, is to develop continuously in HEAD; when a number of changes are ready to be released, a tool such as the plugin can be used to both tag the sources and release those tags into the map file in a single operation.

This permits a granular approach to plugin versioning wherein only the plugins which have actually changed are renumbered. Thus incremental updates between releases are possible: if only a single plugin changes from week to week, only that plugin will be upgraded.

This approach also allows your adopters to get more stable, weekly updates rather than hourly or ad hoc nightlies which may include incomplete changes. Sure, you could just watch for good builds in Hudson, but a more predictable schedule makes inter-build communication easier.

The only issue with this approach is that it introduces extra overhead, unless the tag&release process can be automated and run on a (weekly?) schedule. For CVS sources, there is the crontab script available; for SVN, no such script (yet) exists. Want to help? See bug 264713.
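Until such a script exists for SVN, the scheduling half of the automation is straightforward; as a hedged sketch (the script path below is hypothetical, not an existing tool), a weekly crontab entry might look like:

```shell
# hypothetical: run a tag-and-release script every Monday at 03:00;
# the script path is made up - substitute your own automation
entry='0 3 * * 1 /opt/releng/tagAndRelease.sh'
echo "$entry" > /tmp/crontab.sketch
cat /tmp/crontab.sketch
```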

Hybrid Approach

With the Athena Common Build, you can set up a Hudson job to run from tagged maps, but also override those tags on demand to force a build from head (or trunk) instead of from the specific repo tags.

To do so, pass in the following flags to EXTRAFLAGS via your build's job configuration / parameters. To build from the workspace's sources instead of fetching anything new from the repo (bug 252774):

-localSourceCheckoutDir /opt/users/hudsonbuild/.hudson/jobs/${JOB_NAME}/workspace/

To fetch from the repo, but ignore tags (bug 251926):

-forceContextQualifier -fetchTag HEAD

Or, like many projects have done, set up two jobs: one for nightly builds from trunk, and one for weekly integration builds from maps. Then, instead of doing CI builds when you remember to, they will run automatically when the repo changes, so you'll have immediate feedback when something has broken. Add email, IRC, or Twitter notification to the job, and everyone else will know too.


Dash Athena: More Ant, More Tests, More Repos!

  • Infrastructure Changes
  • Cross-Platform / Ease of Use
  • bug 295670 Support running JUnit tests from an Ant script instead of Bash - tests can now be run on Linux and Windows (and to a lesser extent on Mac OSX) using the new testLocal build step
  • Bug Fixes
  • bug 292486 Allow builds to fail if unit tests fail using failBuildIfTestFailuresOrErrors=true
  • bug 294678 Categories don't show up with IBM 1.6 JDK - implemented workaround so a different JDK can be used for p2 operations than for compilation of sources: use PACK200_JAVA_HOME=/path/to/different/JDK
  • bug 295773 Non-incubating projects no longer need to set "incubation=" in their
  • bug 292235 Pre-compiled binary features/plugins are now included by default using PDE's runPackager property; this behaviour can be disabled with packageFeaturesIncludedBinaries=false
  • Documentation
  • Testing - Different ways to run or re-run tests, including as a secondary process after a build (so they can be run on a different platform or by a different user). Note: p2 does not natively support remote repo zips; to work around this, the zip is fetched and the URL is rewritten from http://path/to/ to jar:file:/tmp/path/!/
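The URL rewrite in that workaround can be sketched in shell (paths here are illustrative):

```shell
# sketch: p2 can't read a remote repo zip directly, so fetch it locally
# and hand p2 a jar: URL into the downloaded archive (paths illustrative)
url="http://path/to/site.zip"
local_zip="/tmp/${url#http://}"      # where the zip gets fetched to
p2url="jar:file:${local_zip}!/"
echo "$p2url"
```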

Previous New & Noteworthy | More Athena Docs


Ontario [GNU] Linux Fest 2009

Annoyed that your boss didn't approve that expense to fly over to Germany for Eclipse Summit Europe? Well, fret no more!

Have we got a deal for you!

For only $40 you can attend Ontario [GNU] Linux Fest 2009 - a whole day of open source geekery from the following speakers! And, while there, stop by the Eclipse and Fedora tables for your chance to win some schwag!

I'd like to say we'll be giving away Slap-Chops or Sham-Wows too, but Vince will be rappin' & reppin' elsewhere. :)

HOWTO: Join Architecture Council, Break Hudson, Break Athena

About a week ago, Wayne Beaton approached me with an unexpected question: would I like to be drafted into the rank and file of the Eclipse Architecture Council?

I was honoured. Flabbergasted. And a bit apprehensive: could I measure up?

Well, it seems my first act as an AC member was, Thursday night, to cripple our shared Hudson build server by dumb-assedly performing a seemingly innocuous update. Heartfelt thanks to the Webmasters for once again extracting my cheeks from the fire and getting everyone's builds back up and running.

My second act, Friday afternoon, was to commit some "fixes" to the Athena Common Build code which effectively prevented it from reading your project's Thankfully, the ever-vigilant Dave Carver spotted the snafu, filed a bug, and I had my error fixed shortly after my return from Zombieland last night (aside: best zom-rom-com since Shaun Of The Dead!).

Thanks to Matt, Dave C, David W, Denis, Gunnar, and everyone else for reporting &/or fixing these issues, and to the AC for welcoming me into their fold. It's going to be interesting!


Dash Athena News, Oct 2009

Have you heard the latest? Here's what's been going on in the Dash Athena project lately.

Infrastructure Changes

  • There are now 43 Athena jobs on! Of those, 30 are green, 1 is yellow, and 6 have not yet been enabled. These jobs represent 29 different projects' builds! 6 of them use SVN sources instead of CVS.

  • bug 257074 comment 12 now has SVN 1.6.5; if your map files no longer work (your build complains it can't resolve plugin or feature sources) then read this.

New Features

  • bug 291446 Provide hook for extra actions after fetching code from repo and before compiling it (e.g. code generation, parser generators, etc.)

  • bug 275529 Athena is now a full Project rather than a Component! Now if we could just get someone to design a logo... Do I need to offer up prizes? I will, if you comment on bug 272723 with some design ideas (or prize ideas).

Better Documentation

See also Older New & Noteworthy.


HOWTO: Enable Zimbra filtered message folders on Blackberry w/ BES service

Thanks to the folks at my IT helpdesk for this tip on how to enable your BB to get mail from within Zimbra folders:

  1. Go to your Messages folder, hit menu key and select Options.
  2. Go to Email Settings. Hit menu button, select Folder Redirection.
  3. You will see your mailbox which can be expanded. Expand your inbox and select the folders you want to be sync'd (on my device they're blue instead of grey). Hit menu button, select Save.

Any NEW messages routed to those folders will now be located in the folders on the device as well. As new messages, they'll appear in your Messages folder along with SMSs, MMSs, BBIMs, and other email. Once read, they'll disappear into the appropriate folder(s).


Re-Return to Athena

Thanks to Miles for taking the time to slap Athena into submission on his local system - frequent and regular stress testing is what's going to make this facade over PDE bridge the gap between today and the far-flung future (namely B3).

Because he took the time to itemize his concerns / problems / gotchas, I thought I'd take the time to explain why these happen... and which are bugs vs. features. Speaking of bugs and features, Athena has a handy New & Noteworthy wiki page which I update about once a month. If you've never seen the N&N, it's here.

# skipPack=true is useful if you want to test locally. I found that the update manager does not work with pak'd jars when running locally. Perhaps P2 is relying on something on the web server side..?
If p2 isn't properly unpacking pack'd jars, the problem is either: you're using a JDK with pack200 bugs (like Sun 1.6), or you have jars which should never be packed. How can you exclude jars from packing? See JarProcessor Options.
# The build scripts appear to be simply searching for the occurrence of strings to determine whether a given map entry type is being used. They do not appear to respect the comment lines. I guess I should file a bug report on this one -- it cost a bit of time because I have a dodgy SVN setup on my home laptop. I'm actually still not sure what is going on with that (and does the mysterious Java 134 error msg have anything to do with it?) but I'm trying to learn not to fix things for which I have a work-around, i.e. if it works on a different machine, just use that instead!
The purpose of the collectMapTypes task is to simply know what types of maps are being used so that warnings can be echoed to the user if appropriate. If you use SVN on linux, there's a good chance you'll get this message:
[echo] If you get a SIGSEGV or Java Result: 134 when fetching from SVN, remove 
[echo] subversion-javahl and svnkit to force PDE to use commandline svn client.
Why? Because it's a pain in the butt to debug, and at least having a message in the build log gives you some tea leaves to read. I favour documentation over performance in this case.
# There is a somewhat lengthy description at of how to set up your local build to use a local cache CVS site but I'm not sure what scenario that would really be helpful in. I just use COPY and it works fine. I suppose that using the CVS approach might exercise some issues that you wouldn't run into with a plain COPY. And I'm not sure whether the local build copies over the binary files or not. If it does, you could get different build results if, say, your local environment happens to place some artifacts in the copied directories that aren't cleaned out by the PDE build. Not going to worry about it!
A more useful scenario is to take a dump of a CVS, SVN, or Git tree, and point your build at that folder using localSourceCheckoutDir=/path/to/full/source/tree/root in your file. Then it'll build against that local source cache instead of having to fetch from the repo. Once you have the dump of the repo, from Eclipse do File > Import > Existing projects, and you can have those same projects compilable/editable in your workspace. You can also use Eclipse (eg., with Subversive, Subclipse, eGit) to update the cache before doing a build.
# A useful hack when building locally is to pack up your difficult dependency(s) into a zip file and refer to them in your dependencyURL. It doesn't matter what you use for the URL, because it will use the cached file instead. For example, GEF3D doesn't yet have a build or update site. (We've been collaborating a bit on this so hopefully the build I've got working will help them get one up soon.) So instead of solving the SVN stuff right away, I put that off until everything else worked. So I created a zip and added an entry for You have to be careful to have your zip file packaged up correctly or it won't work. That's easy to get wrong... your file structure needs to conform to the standard structure, i.e. eclipse/plugins/some.plugin. On the other hand, you have to be careful to remove these fake dependencies when you begin testing the real fetched versions! The fetch part then fails silently; your build succeeds but using the previously downloaded files.
I generally add comments (TODO, FIXME) in my to mark temporary hacks, or use a new file, like so I'm not breaking the server-friendly version. (Just make sure you tell build.xml which file to use, if not the default one -- and DON'T CHECK IN the changed file.)
# But in any case getting the SVN map file right has been a major headache. Again, the only reason I want this is to grab the GEF3D sources, but I can't seem to let this one go as it seems that it should be so easy. The Athena build seems to mangle things in the fetch script; or perhaps this is a PDE Build issue. So I tried the sourceforge style entry and after a couple of hours of fiddling (see below) I found a magic incantation that seems to work -- plugin@org.eclipse.draw3d=SVN,trunk,,,org.eclipse.draw3d@HEAD
@HEAD is optional; you only need to specify a tag if it's not the HEAD of the branch or trunk. Here's a sample SVN map.
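For illustration, here are two hypothetical map entries in the same shape as the one above (project names are made up, and the empty fields are left as in the original entry); the first omits the tag suffix and so builds from HEAD, the second pins a tag:

```shell
# write two hypothetical SVN map entries to a scratch file; names are
# made up, and @HEAD is implied when no tag suffix is given
cat > /tmp/sample.map <<'EOF'
plugin@org.example.foo=SVN,trunk,,,org.example.foo
plugin@org.example.foo.ui=SVN,trunk,,,org.example.foo.ui@v20091110
EOF
grep -c '^plugin@' /tmp/sample.map
```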
# It would be super nice if there were a built-in dependency analyzer so that one could find the root plugin issues easily. As it is, if you have a bad dependency you get the whole tree and you have to walk the tree by hand to find out the root cause. Overall I think this is another demonstration of the ultimate weakness of the standard build systems out there. They are by nature linear (batch) approaches and so usually you can't do post hoc or inline analysis of issues. There is no semantic information in logs! You end up having to parse them back if you want to do any useful analysis. In a parallel lifetime, I would love to work on a simple semantic logging system that would allow structured analysis of this sort.
I believe the Buckminster guys are doing work in this area, so this may make its way into B3.
# Here is something else that I would love to do given time. The map files are pretty well structured - it would be nice to whip up a quick XText editor that would at least check syntax and more ambitiously provide a mechanism for hooking in semantic checks -- wouldn't it be nice if you could check that that SVN url is correct while editing the map file itself?!
Add that to the wishlist for B3!
# I'm still not sure how Athena determines what goes in the sdk vs. runtime vs. example zips so for now I'm just building the examples manually. Anyone come across documentation for this?
This arbitrary allocation of files is done in common.releng/tools/scripts/buildAllHelper.xml, lines 1063-1097, and is only done if the build.steps entry 'buildZips' is enabled. This behaviour is as far as I'm concerned deprecated, but in place simply to ease transition for existing builds using the old Common Modeling Build to the Athena Common Build. IMHO, no one needs these zips - you only need an update site zip. If you don't like what's in the zips, you can omit the 'buildZips' step and package your own zips by hand, eg:
 <zip destfile="${buildDirectory}/${buildLabel}/${SDKZip}" update="true">
   <zipfileset src="${buildDirectory}/${buildLabel}/${allZip}" dirmode="775" filemode="664" excludes="**/*.pack.gz, **/, **/features/*.jar, **/${domainNamespace}.*.all*, **/${domainNamespace}.*.all*/**" />
 </zip>

(For those unfamiliar with the story of the Re-Return, you need to watch HIMYM, S1E15.)


Opera 10 as Hudson Helper

As there's still no Hudson Helper for Blackberry (David: hint, hint!), I've been forced to do my own monitoring in a browser (when not monitoring by email, that is).

Now, with Opera 10 (and its new tab layout options), it's even easier to monitor multiple builds via a single view:


HOWTO: AVI to DVD Conversion, Part 2: Merging Multiple AVIs

Converting a single AVI to DVD format is easy.

However, if you want to merge multiple files into a single DVD image, you must:

  • Convert the AVI files to MPEG:
    for f in $(ls /path/to/season1/{GI_Joe.S1E0*.avi,GI_Joe.S1E1*.avi,GI_Joe.S1E2{0,1,2,3}*.avi}); do \
      g=${f/.avi/.mpg}; g=${g/season1/season1_mpg}; \
      transcode -i $f -y ffmpeg --export_prof dvd-ntsc --export_asr 3 -o movie \
        -D0 -s2 -m movie.ac3 -J modfps=clonetype=3 --export_fps 29.97; \
      mplex -f 8 -o $g  movie.m2v movie.ac3; \
      rm -fr movie.m2v movie.ac3; \
    done; \
    cd /path/to/season1_mpg
  • Next, using a more complex dvdauthor.xml file...
    <dvdauthor dest="DVD_Season1_Ep01-05">
      <vmgm />
      <titleset>
        <titles>
          <pgc>
            <vob file="GI_Joe.S1E01.The_Further_Adventures_of_G.I.Joe.mpg" chapters="0,8:00,16:00,21:00"/>
            <vob file="GI_Joe.S1E02.Rendezvous_in_the_City_of_the_Dead.mpg" chapters="0,8:00,16:00,21:00"/>
            <vob file="GI_Joe.S1E03.Three_Cubes_to_Darkness.mpg" chapters="0,8:00,16:00,21:00"/>
            <vob file="GI_Joe.S1E04.Chaos_in_the_Sea_of_Lost_Souls.mpg" chapters="0,8:00,16:00,21:00"/>
            <vob file="GI_Joe.S1E05.Knotting_Cobras_Coils.mpg" chapters="0,8:00,16:00,21:00"/>
          </pgc>
        </titles>
      </titleset>
    </dvdauthor>
    ...merge the MPEG files into a single disc image:
    dvdauthor -x dvdauthor_s1e01-05.xml
  • Verify the video and audio will play.
    xine dvd:/full/path/to/DVD_Season1_Ep01-05
  • Burn the DVD.
    growisofs -Z /dev/dvd1 -dvd-video DVD_Season1_Ep01-05/

Yo, Joe!


We Don't Need Another Repo

Re Wayne's blog about a Maven Repo:

EMF has had an undocumented/unmarketed Maven2 repo for about 4 years now.

All you need is to take an update site zip, unpack it, rearrange the folder structure, and rename the jars. Then you create little XML files called .poms to describe the jars in the tree, and Maven-aware tools can read the tree. It's fairly trivial. is the URL, IIRC. About once every 2 years someone asks about our providing such a repo, and I give out the URL. Clearly not a huge demand for it.
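The rearrangement described is, in essence, the following (all names and paths below are made up for illustration; this is a sketch, not the real EMF script):

```shell
# hypothetical sketch: unpacked update-site jars become
# groupId/artifactId/version paths with minimal .pom files
mkdir -p /tmp/site/plugins /tmp/m2repo
touch /tmp/site/plugins/org.example.emf.common_2.5.0.jar
for jar in /tmp/site/plugins/*_*.jar; do
  base=$(basename "$jar" .jar)
  artifact=${base%_*}            # e.g. org.example.emf.common
  version=${base##*_}            # e.g. 2.5.0
  dest="/tmp/m2repo/$artifact/$version"
  mkdir -p "$dest"
  cp "$jar" "$dest/$artifact-$version.jar"
  # minimal .pom describing the jar for Maven-aware tools
  cat > "$dest/$artifact-$version.pom" <<EOF
<project>
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>$artifact</artifactId>
  <version>$version</version>
  <packaging>jar</packaging>
</project>
EOF
done
ls /tmp/m2repo/org.example.emf.common/2.5.0/
```

A Maven-aware tool pointed at such a tree can then resolve the jar by its groupId/artifactId/version coordinates.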

One might argue that creation of such a folder structure is in the purview of Athena's publishing scripts, which today eases the process of copying your bits to, then unpacks your Update site so it can be scanned by p2 rather than downloaded as a single archive. It too is fairly trivial. I would not be averse to converting my existing shell script for the EMF repo creation into a generic Ant script for use by Athena users.

Frankly though, I think it would be more valuable if the m2eclipse folks added support for reading/converting p2 repos. Publishing yet another file format would require another release-train-like workflow (we already have two: EPP and buckybuilder for Galileo) and more people to maintain it. Even if every project published their own maven repo, for ease of use we'd want to aggregate them into a central place for easier navigation and discovery by maven tools. So, as with Ganymede, we'd have each project's bits copied to two places on disk for each build. (Galileo used composite repos to POINT at project repos rather than copying them, saving tons of disk space and CPU cycles. AFAIK, Maven does not support this concept, but I could be wrong.)

There's also another benefit to having tooling to support converting from p2 repo to maven2 repo: the aggregate repo could be housed at and suck THEIR bandwidth and support resources instead. Thus just as is upstream from Fedora's Eclipse project .rpms (which are upstream from Debian/Ubuntu's .debs), p2 repos could be upstream from Apache's Maven repo(s). After all, Apache already collects maven artifacts for projects to facilitate the use and adoption of maven, so this is entirely in line with their standard operating procedures.


Posted from Blackberry using Opera Mini, by the side of Shawnigan Lake, Vancouver Island, BC


My love-hate with SVN, Part 8: Unprotected & Unhidden Metadata

SVN metadata appears when searching

Why would anyone ever want to see .svn folders and their children in Eclipse? If they're hidden from the Package Explorer, why can't I hide them from a Search? And if I did a naive find and replace, wouldn't this corrupt the metadata and prevent me from committing the update to the repo?


Hudson & Virtual Box, Part 2: Creating a common share with vboxsf

My experiment last week with sshfs came to a grinding halt when I realized that while the share works, and can be automatically started, the permissions do not work and Hudson can't actually use the sshfs share; also, I couldn't get sshfs to compile for OpenSolaris, so it's really a non-starter.

Next up was mount_nfs, but after fighting with that, /etc/nfs.conf and /etc/exports for a couple days off and on, I realized there's a much simpler solution using Virtual Box itself: vboxsf. Note that it's sf (shared folder) not fs (file system).

Thanks to David Herron for the inspiration.

Here's how to set up a vboxsf share:

  1. Ignore the Virtual Box wiki which suggests that you can't just use a Shared Folder: you can.
  2. Guest: Launch your guest, then install the Virtual Box Guest Additions onto it.
  3. Host: Create a user and set its uid and gid to some value which can be created on the guest / slave. For example, on Mac OSX 10.5 Server, launch System Preferences > System > Accounts, and create a "hudsonbuild" user with User ID and Group ID set to 500.
    Note that uid 500 and gid 500 are the defaults for the first user on a Fedora system; on Ubuntu it's 1000; on other systems YMMV.
  4. Host: Create a folder in the root of your host called /shared. Put some files there. Give it full write permissions, and set its ownership to the user who'll be sharing the files:
    find /shared -type f -exec chmod 666 {} \;
    find /shared -type d -exec chmod 777 {} \;
    chown -R 500:500 /shared
  5. Host: Add a Shared Folder. NOTE: If your guest is running, shut it down or see the next item. From the Virtual Box GUI (on the host), select Settings > Shared Folders > New (+). Add a shared folder called "shared" mapped to /shared (or call it something else and map it to a different path on your host).
  6. Guest: Alternatively, you can add a Shared Folder to a running guest from its Devices menu. Add a shared folder called "shared" mapped to /shared.
  7. Guest: As root, create a folder on the guest called /shared. Leave it empty, but make sure it's owned by your user.
  8. Guest: As root, add this to the guest's /etc/fstab:
    shared  /shared vboxsf  auto,exec,rw,uid=500,gid=500    0       0
  9. Guest: As root, mount the share one of three ways:
    # mount everything in /etc/fstab
    mount -a
    # mount using mount (and optional -o options)
    mount -t vboxsf -o auto,exec,rw,uid=500,gid=500 shared /shared
    # mount using mount.vboxsf (and optional -w and -o options)
    /sbin/mount.vboxsf -w -o rw,uid=500,gid=500 shared /shared
  10. Guest: If this mount doesn't start automatically when the VirtualBox guest is started, you can add one of the above commands to the script you use to start the Hudson slave, eg:
    # Init file for Hudson server daemon
    # chkconfig: 2345 65 15
    # description: Hudson server
    # HOWTO:
    # 1. rename this file on target server (Hudson slave node) to /etc/init.d/hudson
    # 2. to enable this script to run on startup, as root type the following:
    #    Fedora: 
    #        chkconfig --add hudson; chkconfig --list hudson
    #    OpenSolaris:
    #         for n in 0 1 6;   do cd /etc/rc${n}.d; ln -s /etc/init.d/hudson K15hudson; done
    #         for n in 2 3 4 5; do cd /etc/rc${n}.d; ln -s /etc/init.d/hudson S65hudson; done
    #         svccfg add hudson; svcs | grep hudson
    #run Hudson as a slave node
    java -jar /opt/hudson/slave.jar -jnlpUrl \
      http://vboxsvr:8080/computer/${slaveNodeName}/slave-agent.jnlp &
    /sbin/mount.vboxsf -w shared /shared &
    exit $RETVAL


Hudson & Virtual Box: Creating a common share with SSHFS

I've been working recently with my own Hudson server, setting up Fedora and OpenSolaris images as slave nodes to prototype new ways to build. Tonight I found a very handy trick for mounting shared drives using SSHFS. Kudos to the man page writers -- all the tips I found via Google were outdated or just plain wrong. man sshfs to the rescue!

Step 1: create a folder on the Hudson master / Virtual Box host with files you want to share, eg., /shared

Step 2: create an empty folder on each Hudson slave / Virtual Box guest as a mount point for the shared folder, eg., /shared

Step 3: on the guest, install sshfs, eg., sudo yum install fuse-sshfs.

Step 4: mount the share: sshfs /shared

Of course you'll probably want to set up ssh keys and add this to your startup scripts so that the mount happens automatically when the guest is launched, but I'll leave that for next time.


Athena Common Builder: Thanks to the early adopters!

Here's a quick list of some of the projects @ currently using Athena. If you haven't tried Athena, maybe it's time!

  1. Linux Tools
  2. Visual Editor (VE)
  3. Voice Tools (VTP)
  4. PDE Declarative Services Modeling Incubator (pde.ds.modeling.incubator)
  5. Nebula Widgets Gallery
  6. Faceted Project Framework (fproj)
  7. EMF Query (EMF MQ)
  8. EMF Validation (EMF VF)
  9. EMF Transaction (EMF MT)
  10. Ajax Tools Framework (ATF)
And @, all available in JBoss Tools 3.1.0.M2 for Eclipse 3.5 (Galileo) ...
  1. JMX Console
  2. BPEL Editor*
  3. jBPM 3 and 4
  4. FreeMarker
  ...to name but a few.

* - will be available in M3 or you can get a nightly build here.

What are people saying about Athena?

"[I] really like the new build. It is much less confusing then the old [Common Modeling] one."

"[C]ongrats for this builder. It is quite good and I'm eager to rely fully on it."

"We are playing around with Athena and finding it really useful. It is already deployed to one of our customer developments. Thanks a lot for the hard work! We will definitely try to contribute back anything useful we will have in the process."

So, how do *you* get started? Here's a FAQ. Here's our New and Noteworthy. And here's all the rest of the knowledge base articles.

Oh, and I'd be remiss if I didn't mention that there's a big list of requested features still waiting for contributions. If you use Athena, you're already half-way to contributing back. Want to help? Drop me a line any time and we can discuss what holes need filling that match your skills and most directly improve your use of PDE, p2 and Athena.


Using Hudson for parallel Athena builds

This week I set up a multi-configuration Hudson job to try doing EMF's Query, Validation, and Transaction projects as a single job (instead of the traditional three linked jobs), then Bernd took over to do most of the work required to migrate to the new, simpler build system... and unearthed a few sticky bugs along the way. For example, Athena now supports running from jarred test plugins, as well as test plugin folders. (Yes, I'm amazed no one ever tested that use case before now either. Anyway, it works now.)

Doing these builds in parallel (rather than serially) may be a bad idea, but for now, it lets the Q,V,T developers build all three linked components via a single build button, and if the sources for any of the three changes, all three will be rebuilt and tests will be run.

Since Transaction depends on Query and Validation, this is certainly useful; however, Query and Validation do not depend on each other, so there may be some extraneous build churn with this set up.

Still, it's nice to see the status of all three builds in one place:

What's next? Well, hooking up FindBugs, eliminating all the old SDK/runtime zips (because we have the new, shiny, p2 update site zip instead, and you can convert from repo to "runnable" SDK easily using this script), and figuring out how to make the builds use each others' newer binary output instead of building from a known entity (Query 1.4.0.N.ten.minutes.ago vs. 1.3.0.R.last.month). Luckily, Athena supports building against p2 repos and Hudson has APIs for fetching the latest good output, so this should be as simple as configuring the Transaction build to fetch from the latest successful Query zip using properties like these:


Of course if you want to build against the latest PUBLISHED update, you could do this:



My love-hate with SVN, Part 7: Setting svn:ignore properties

If you've inherited code where someone has accidentally committed *.class files, here's how you can find those bin/ folders, wipe them out, and hopefully never see them in the Synchronize view again.

Sure, you could use Team > Add to svn:ignore... on a file or Team > Set Property... on a folder in Eclipse using Subversive, but this is faster and less clickity-click.

for d in $(find ~/workspace/ -maxdepth 1 -mindepth 1 -type d -name "org.eclipse.*"); do
  pushd $d
  svn up
  svn propset svn:ignore "bin
bin/*" .
  svn propget svn:ignore .
  svn propset svn:ignore "*.class
**/*.class" bin
  svn propget svn:ignore bin
  cd bin
  rm -fr *
  svn delete org
  svn delete model
  # find other deleteables with `svn status`
  cd ..
  svn ci -m "svn:ignore"
  popd
done

HOWTO: generate .m3u playlist from .mp3 directory

Been trying to find a solution to this one for ages. Turns out it's stupidly simple - just dump the results of a find into a file.

#!/bin/bash
dir="$1"
echo "Create playlist for $dir ..."
if [[ $2 ]]; then list="$2"; else list="$1"; fi

pushd "$dir" 2>&1 >/dev/null
find . -type f -name "*.mp3" > "$list.m3u"
echo "Found these files:"
cat "$list.m3u"
popd 2>&1 >/dev/null

With or without the "./" prefix on each found file, the resulting .m3u files work on my Blackberry (and are found automatically), including nested folders and paths with spaces. To run the above, just do this:
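If some player does mind the "./" prefix, stripping it afterwards is a one-liner (the filenames below are made up):

```shell
# strip the leading "./" that find prepends to each entry
# (example filenames are made up)
printf './Disc 1/01 - Intro.mp3\n./Disc 1/02 - Song.mp3\n' | sed 's|^\./||'
```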

$ ./ "artist folder/disc folder" "playlist name"


My love-hate with SVN, Part 6: Installation Ease Of Use

For months I've been annoyed by the fact that installation of Subversive (or Subclipse) requires fetching features and plugins from 3 or more update sites. No more!

Today, as an exercise to learn how to use the <p2.mirror/> task and provide a reproduceable, offline way to get Subversive into a virtual machine, I've created an update site zip, complete with site.xml and p2 metadata, which can be used to install Subversive from a single source. Here's the Ant script if you'd like to try this at home.

Because let's be real: you can only complain so long before it's time to roll up your sleeves and pitch in, right? That's how open source survives - thanks to people who care enough to complain AND care enough to help.

Here's the 13M update site zip, which includes the following:

Subversive 0.7.8
SVN Connector 2.2.1
SVNKit 1.2.3
JNA 3.0.9
ECF 3.0.0

Any problems, please report them in bug 284077.


HOWTO: Burn ISO image to DVD w/ linux commandline

When burning an ISO image to disc, I simply use this:

$ growisofs -dvd-compat -speed=1 -Z /dev/dvd=disc.iso

Using -speed=1 takes a little longer than the default "as fast as possible" mode, but guarantees the disc can be properly read in the pickiest of drives (eg., a Wii DVD drive). DVD-R (not DVD+R) is also recommended.


HOWTO: Be full of C.R.A.P.

In part 1 I rambled on at length about what I think needs to be done to prove yourself to a project team in order to become a committer on that project.

So, what the crap's up with being full of C.R.A.P.? I'm not referring to the four principles of design (Contrast, Repetition, Alignment, Proximity), though there are some similarities here.

For me, being full of C.R.A.P. is about the transition from one state to another: from Contributor to Committed.

How do you move from one state to another?

Give a crap, Clean up some old crap, Make some new crap, Now you're C.R.A.P.!

Simple, right?

'Till next time...

HOWTO: Becoming an open source project committer

The Tweetosphere/blogosphere has been buzzing with discussions about what one needs to do to be a committer @

I got my rights by working for IBM and being handed the keys to the Porsche when I started working at the Toronto Lab as a member of the EMF team, oh so many lunar eclipses ago. No longer with IBM, I'll retain my committerships until I manually ask to be removed, or they claw 'em from my cold dead hands. After all, what's a revised patch but a 2nd Amendment? (Aside: seriously, people, it's 2009. You don't need a gun. There's no Imperial Army coming to steal your land. LET IT GO.)

For most committers, however, you can't just be appointed to the job; you have to earn it. So, here are my tips for getting on *my* project, the Athena Common Build.

  1. Easiest way to get on the project: be invited by someone already on the team by personal recommendation (see criteria below). Others can +1/-1 the suggestion based on the criteria below, but in my experience with other projects, no one ever vetoes a nomination. (I've seen it once, and it only delayed that person's committership by about a month.) So cozy up to the existing committers, and you're in. Why is this? Because it's OPEN source, and how can you be open if you exclude people who want to contribute?
  2. The nominee must use the project at least weekly, if not daily. For Athena, this means you have to be actively writing Ant scripts, doing builds, or at least be active in PDE or p2 development. Why is this important?
    a) I don't want "dump and run" code which I'll then have to maintain, and
    b) if you're not a user, you can't intelligently decide what pains exist and which are important to solve
  3. I'd like to see two accepted patches to prove you've got the technical skill, and that you're willing to throw down and help with existing known issues - see 2 (b) above.
  4. If you're not technical (or not *yet* technical), then you need demonstrated skills or commitment, or have worked in a related field with someone mutually known who can vouch for you.
So, what constitutes "commitment?" Lots of things...
  • show up to meetings
  • comment on or write bugs, blogs, wiki pages, articles, recipes, HOWTOs, newsgroup posts, mailing list replies, IRC
  • submit patches or test cases
  • help triage bugs
  • mentor students (GSoC or other)
  • run contests, do viral marketing, etc.

Now, of course, these items are not all measurable, but if people know you're involved, and you'd like to be a committer, you'll likely be voted in. (Many people trying out Athena may have noticed I've offered them committer rights in exchange for code or doc contributions. So far, no takers, but the offer stands.)

Frankly, I'd rather have more people as committers who do little to the code base but who have the power to do so when needed. For example, (if the data is accurate) Kim's only committed 48 LOC in the past 9 months, compared to my 80,000 LOC (seriously, that can't be right) - but what she, Andrew and Andrew have done has been invaluable. And, often much more valuable, they've all helped out with advice in bugs. Thanks!

Good planning trumps code any day.

Continued in part 2

Tracking Build Status With Hudson Data APIs

A number of people have been twittering recently about Hudson Helper, and the fact that it can't (yet) support http access to Hudson servers. (There's just no pleasing some people, eh David?)

UPDATE: David reports that Hudson Helper has worked with both http and https since day one. He invites direct feedback if you're having problems.

To help fill this gap, I'd like to detail some of the handy API features of Hudson I've discovered since I first started using it back in October, which can be fetched via http (or https) in a browser or via a script.

  • Latest successful build number: buildNumber
  • Latest successful zip (published artifact): GEF-Update-*.zip
  • All checked-out Project Set Files (Hudson workspace): *.psf
  • XML digest of the latest stable build: lastStableBuild/api/xml
  • SVN revision used for the latest stable build: //changeSet/revision/revision

For more on the APIs available to the Latest Successful, Stable, Failed, or in fact simply the Latest Build, see:

  1. /lastSuccessfulBuild/api/
  2. /lastStableBuild/api/
  3. /lastFailedBuild/api/
  4. /lastBuild/api/

Of course, should you want details on a specific build rather than the latest, you can replace the "last*Build" token with an actual build number.
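For scripting, it can be handy to wrap that substitution in a tiny helper. The hudson_api_url function below, and the server/job names it's called with, are my own invention for illustration, not part of Hudson:

```shell
# hypothetical helper: build a Hudson API URL from server, job, and either a
# "last*Build" alias or a concrete build number
hudson_api_url () {
  local server="$1" job="$2" build="$3"
  echo "http://${server}/hudson/job/${job}/${build}/api/xml"
}

hudson_api_url jboss-hudson-server drools lastStableBuild
hudson_api_url jboss-hudson-server drools 1234
```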

Finally, because no post about APIs would be complete without some script showing how to exploit that interface, here's a quick example of how to fetch the latest successful, and as yet unreleased, Drools 5.1 runtime library zip for use in our JBoss Tools 3.1.0.M2 builds. In this example, we fetch the build number for the last successful build and compare it to a cached version. We also fetch and cache the latest SVN revision number (in a file) so that we can later fetch Drools sources from the same point in time as the precompiled Drools binaries in the zip. This guarantees we're building from trunk, but only a good build in trunk, skipping over any failed builds or intermediate states (partial commits).


buildNumOld=0; if [[ -f $droolsSNAPSHOTnum ]]; then buildNumOld=$(cat $droolsSNAPSHOTnum); fi
buildNumNew=$(wget -q --no-clobber -O - http://jboss-hudson-server/hudson/job/drools/lastSuccessfulBuild/buildNumber)

buildRevOld=0; if [[ -f $droolsSNAPSHOTrev ]]; then buildRevOld=$(cat $droolsSNAPSHOTrev); fi
buildRevNew=$(wget -q --no-clobber -O - http://jboss-hudson-server/hudson/job/drools/lastSuccessfulBuild/api/xml?xpath=//changeSet/revision/revision)

if [[ $buildNumNew -gt $buildNumOld ]]; then
 # we get the revision wrapped in XML tags, e.g. <revision>27013</revision>; must change to 27013
 echo $buildRevNew > $droolsSNAPSHOTrev;
 sed -i "s#<[^>]\+>##g" $droolsSNAPSHOTrev
 buildRevNew="$(cat $droolsSNAPSHOTrev)"; #echo "."$buildRevNew"."
 # replace "defaultTag=trunk:\d+" with defaultTag=trunk:${buildRevNew} in
 #  defaultSvnUrl=
 #  defaultTag=trunk:27013
 sed -i "s#defaultTag=trunk:[0-9]\+#defaultTag=trunk:$buildRevNew#g"; # grep "defaultTag=trunk:"

 echo $buildNumNew > $droolsSNAPSHOTnum;
 echo "Download $droolsSNAPSHOTurl ..."
 wget -q --no-clobber -O $droolsSNAPSHOTzip $droolsSNAPSHOTurl

 # ...
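Those two sed transformations can be exercised in isolation. The sample values below are made up for illustration; the real ones come from the Hudson API and the build's properties file:

```shell
# sample of what the xpath query might return: the revision wrapped in XML tags
resp='<revision>27013</revision>'
rev=$(echo "$resp" | sed -e 's#<[^>]*>##g')   # strip the tags, keeping the number
echo "$rev"

# sample properties line whose pinned revision needs bumping to that value
line='defaultTag=trunk:26890'
echo "$line" | sed -e "s#defaultTag=trunk:[0-9]\+#defaultTag=trunk:$rev#g"
```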

Oh, and BTW, if you're ever looking for the latest hudson.war executable, it's always here.


Workin' For The Wiikend

After acquiring my first DriveKey-powered "try before you buy" Wii game via torrent (and having a little fun fighting the Joker's minions off while occasionally blowing Robin into his component bricks with a well-placed BatBomb), I decided tonight to do a little more hacking. Thanks,!

So, with the wife out watching some chick-flick w/ a friend, I got to spend a few hours playing with the HomeBrew Channel on my Wii. Very cool stuff available, from game emulators & ports, to new games, media players, and utilities. Complete list here.

To set up the HomeBrew Channel, follow these steps, including installation of the DVDx application so your Wii can play video DVDs.

Then, install the HomeBrew Browser, and grab some more software. After numerous tests, crashes, and reboots, I found that the best three options for playing video are these, all available through the HomeBrew Browser or via manual download from

Here's what I tested:

Tested media: (1) 2G SD card w/ .mp3; (2) bus-powered 2.5" 500G USB drive w/ .avi; (3) DVD-R w/ .avi; (4) DVD-R Video DVD (burned w/ growisofs from dl'd .avi torrent) [1]; (5) Video DVD (original, possibly DVD-DL? or DVD+R)

  • GeeXboX (embedded linux): (1) Y, (2) Y, (3) N, (4) N, (5) N
  • MPlayer CE: (1) Y, (2) Y, (3) Y, (4) Y, (5) N
  • MPlayer TT: (1) Y, (2) Y, (3) N
So, while I have scripted the process for easily converting .avi to DVD, I now no longer need to do so -- I can just plug my USB drive directly into the Wii and watch it on the big screen w/o having to waste hours in format conversion. Wii!



It's taken a while, but I've managed to get some metrics for how much mail I actually process.

Here's my inbox 3 weeks ago before I went on vacation for a week, then went without VPN access for a few days. The xkcd strip is particularly apropos.

Here's that same inbox today, sporting a newer version of Thunderbird. Note the pileup of over 1,000 emails in three weeks, in just ONE of the mailing list filter/folders I monitor.

So, other than filtering by sender & subject, automatically marking my own mailing list replies read, colourizing emails to make the more important ones stand out, and using "Show Unread Threads" view filtering ... what else can one do to manage the deluge?

Does anyone have any good, realistic strategies for dealing with 1000s of emails a month?

Simplified Win XP Pro EULA
-- Reminds me of the WTFPL license...


Mac OS X - VPN vs. LAN: DNS Royal Rumble

I've been "sharing the Mac experience" for the past day trying to get access to my local LAN and VPN concurrently. So far, it's only one or the other, but never both at the same time.

I've tried the Cisco client, the Shimo client, vpnc (compiled from scratch with and without openssl support), vpnc 0.5.3 from DarwinPorts, and even this custom bit of script I wrote based on some tips about using scutil.

# goal here is to collect the DNS entries from the active services and merge them into the Global list

tmpfile=/tmp/getIPs.$$ # scratch file for collecting addresses

# get IPs from services using scutil
function getIPs ()
{
        keys=$(echo "list State:/Network/"$1 | scutil | awk '{print $4}')
        for f in $keys; do
                echo "> show $f"
                printf "get "$f"\nshow "$f | scutil | grep "\."
                echo "show $f" | scutil 2>&1 | grep "\." 2>&1 | \
                  awk '{print $3}' 2>&1 >> $tmpfile
        done
        #cat $tmpfile
        IPlist=$(cat $tmpfile | sort -r 2>&1 | uniq 2>&1)
        for i in $IPlist; do
                return_IPs=$return_IPs" "$i
        done
        #echo $return_IPs
        IPs=$return_IPs # expose the merged list to the setIPs calls below
        rm -fr $tmpfile
}

function setIPs ()
{
        label="$1"
        IPs="$2"; # echo $IPs
        printf "get State:/Network/$label\nd.add ServerAddresses *$IPs\nset State:/Network/$label" | scutil
        echo "> show State:/Network/"$label
        printf "get State:/Network/"$label"\nshow State:/Network/"$label | \
          scutil | grep "\."
}

echo "--- BEFORE ---"
getIPs "Service/.+/DNS"

echo ""; echo "--- AFTER ---"
setIPs "Service/" "$IPs"
setIPs "Global/DNS" "$IPs"

mv /etc/resolv.conf /etc/resolv.conf.bak
for i in $IPs; do echo "nameserver $i" >> /etc/resolv.conf; done
# ./ 
--- BEFORE ---
> show State:/Network/Service/F1C45B82-45A1-4F44-89AC-82102F187F0B/DNS
    0 : 192.168.x.y
> show State:/Network/Service/
    0 : a.b.c.d
    1 : e.f.g.h

--- AFTER ---
> show State:/Network/Service/
    0 : 192.168.x.y
    1 : a.b.c.d
    2 : e.f.g.h
> show State:/Network/Global/DNS
    0 : 192.168.x.y
    1 : a.b.c.d
    2 : e.f.g.h

Obviously, since it's a Mac, there's got to be a dead-simple way for this to work. Anyone know how?


Learning to Love the Mac, Part 2: Mouse Tips & Desktop Management

I have an 8-button Logitech MX500 optical mouse, and this week is the first time I've ever successfully mapped functionality to all the buttons. Windows did a reasonable job with a few of the buttons; Linux doesn't support anything beyond the first three; Mac OS X Server just gets it done.

Out of the box, my third button (scroll wheel) is mapped to the seemingly pointless Dashboard, which is a huge pain when you're used to middle-clicking to open a link in a new tab or to copy/paste text in a console. To get that functionality back, go to Applications > System Preferences > Exposé & Spaces then remove Mouse Button 3 from the Dashboard's "Hide and Show" feature.

Next, I set Mouse buttons 5, 6, and 8 to All windows, Application windows, and Show Desktop.

But even cooler than these is Spaces, though as yet I can't find a way to replicate Gnome or XFCE's ability to drag open app windows from Space to Space, except within the Spaces overview (F8). Still, having up to 16 virtual desktops is very handy, particularly when you need to virtualize Windows and Linux. If you want console windows on all Spaces rather than having them collected on a single Space, uncheck "When switching to an application, switch to a space with open windows for the application".

My love-hate with SVN, Part 5: Fedora 11 + Eclipse 3.5 + Subversion 1.6

Finally figured out how to make Eclipse 3.5 play nicely on Fedora 11 w/ Subversion, and I owe this bit of knowledge to our new MacPro. *sigh*

I also owe a great deal of gratitude to Cloudsmith for providing their Cloudsmith Galileo+ repository, which includes these features:

I still wish the version numbers would better align, in that I have to install the SVN Team Provider v0.7.8 with the SVN Connector v2.2.0 and the SVNKit 1.3.0 implementation v2.2.0 to make all this work with Subversion 1.6. Oof.


Learning to Love the Mac: 13 Tips

A month ago a very large package arrived in the mail: my first MacPro server. I at once fell in love with the case design - clean, simple, and dead-easy to take apart in order to add more drives and RAM. However, that's where the love boat ran aground.

To say it's been a gradual learning curve would be an understatement. Here are a few things I've learned over the past month of dealing with Mac hardware and OS, as well as retraining my fingers to use Mac keyboard bindings (META = Apple Key or Windows Key, depending on your keyboard).

  1. Use META-TAB instead of ALT-TAB to cycle applications
  2. Use META-LEFT/RIGHT instead of HOME/END to jump to start/end of a line
  3. Use ALT-LEFT/RIGHT instead of CTRL-LEFT/RIGHT to jump to prev/next word on a line
  4. META-A, META-X, META-C, META-V replace CTRL-A, CTRL-X, CTRL-C, CTRL-V for select all, cut, copy, & paste. META-L, META-T, META-N replace CTRL-L, CTRL-T, CTRL-N (jump to location bar, new tab, new window). But CTRL-TAB still switches tabs. However, if you have multiple Firefox windows open, there is no way to toggle between them with the keyboard. Same problem with multiple Terminal windows. META-TAB only switches between groups of applications, but not windows within an application.
  5. Sometimes ESC works to dispose a dialog; sometimes only clicking the red X works.

  6. Q replaces qemu, but doesn't seem to work very well for my existing vmware or Virtual Box images
  7. Virtual Box rocks on Windows, Linux and Mac

  8. XCode provides gcc, make, etc.
  9. Fink and DarwinPorts replace Debian/Ubuntu's apt-get and Gentoo's emerge, respectively. Once XCode and DarwinPorts (or Fink) are installed, you can port install vpnc (to fetch deps and compile on the fly) or apt-get install curl (to fetch deps and install).
  10. rEFIt replaces grub, and more or less works as I'd expect. /efi/refit/refit.conf approximately replaces /boot/grub/menu.lst at least as far as picking what partition to default-boot and how long to wait

  11. Java is in /System/Library/Frameworks/JavaVM.framework/Home instead of /opt/ or /usr/lib/jvm/java
  12. Subversion was easier to set up on Mac (using Fink) than on Fedora 10 (using yum), especially since there's now the Galileo+ Update Site from Cloudsmith so you don't have to download from multiple update sites to get it installed.
    However, the version of Subversion available via Fink doesn't work with projects checked out using Eclipse - seems that the commandline client (Subversion 1.4.4) and Subversive with SVNKit (SVN 1.6.1 w/ SVNKit 1.3.0.beta.r5741) are not compatible: svn: This client is too old to work with working copy '.'; please get a newer Subversion client. Using DarwinPorts to update the commandline subversion client to 1.6.3 fixed this issue, but installed it into a different path (/opt/local/bin instead of /sw/bin or /usr/bin).
  13. Eclipse looks better on Mac than on Linux; however, I recently stumbled across a great tip for making Eclipse waste less screen space under gtk on Linux. Highly recommended bit of gtk hackery - one file makes a world of difference!
Do you have any other tips for Linux or Windows people surviving the transition to Mac OSX? Is there any way to tell OSX to use Windows or Linux keyboard defaults so I don't have to retrain myself?


My love-hate with SVN, Part 4: Corrupt Metadata & Going Over Quota

Ever had one of those days where nothing seems to work? Most of June's been that way for me...

This week I decided to trust my OS and let Fedora update me automatically to the latest release, Fedora 11. I've never tried a distribution upgrade; in the past I've only ever done a clean install (be it Windows, Ubuntu, MEPIS, AntiX, or Fedora). But I figured if @dougschaefer could do it, so could I.

It was fairly smooth sailing, though the handy gui tool preupgrade only downloaded packages but didn't do the upgrade, so on reboot (still in F10) I had to run preupgrade-cli "Fedora 11 (Leonidas)". I suspect I must have fatfingered my hard drive password when I rebooted the first time because it worked like a charm the second time. Overall, way more successful than attempts so far to make a Mac Pro get Fedora'd, thatsfersure (grub, video, and network card issues, to name but a few).

Anyway, now I have updated versions of subversion and python, and as a result, my Subversive projects in Eclipse don't work. After much cursing and experimenting (and updating my CollabNet Subversion version to 1.6.3), the solution seems to be simply this:

Check out the projects anew within Eclipse, and if necessary, diff local changes from old project to new project.

But, if the project is too big (jbosstools trunk folder is over 1.1G) you may get a heap error. You can check the whole project out via commandline, but Eclipse (or Subversive? or Mylyn?) uses too much memory and the whole thing dies, despite my running Eclipse w/ a half-gig of heap:

/home/nboldt/eclipse/eclipse/eclipse -clean \
  -showLocation -data /home/nboldt/eclipse/workspace-jboss \
  -vmargs -Djava.library.path=/opt/CollabNet_Subversion/lib \
  -Xmx512M -XX:PermSize=512M

In this case, the solution is to check out the project without recursing into folders.

On the commandline or in the Console view, that looks like this:

svn checkout "" -r \
  HEAD --depth empty  "/home/nboldt/eclipse/workspace-jboss/jbosstools-trunk"

You can then copy stuff you already checked out into the new target project, then refresh the project in Eclipse. Of course in my case Eclipse then thought all the files were new, so I had to Override and Update from the repo.

Another 24374 files or 1.1G to download. No wonder I went over my quota this month!

(Really, it was due to several different .iso torrent downloads for Fedora and CentOS, along with the movie Dead Alive, just in time for BLITEOTW day!)

So, unfortunately, I haven't been able to enjoy any of the 33 projects in this year's Eclipse Galileo release, unlike others on the Planet and the BirdsNest. Hopefully next month will see calmer seas.

On the plus side now that I'm running Fedora 11, I can use Delta RPMs thanks to the yum-presto plugin... so next month's bill should be much, much smaller.


I know it's a beta, but c'mon...

In addition to crashing a few times a day, Firefox 3.5b4 has this cool overlapping button feature on its "fail whale" page:

HOWTO: Enable Firefox 3.0 extensions in Firefox 3.5 for great justice

Just updated to Fedora 11, and with it Firefox 3.5b4. Sadly, that meant most of my extensions (including mouse gestures!) no longer worked... until I found this:

In A.D. 2009,
Firefox 3.5 was beginning.
Captain: What happen ?
Mechanic: Fedora 11 repo set up us the beta.
Operator: We get signal.
Captain: What !
Operator: Main browser turn on.
Captain: It's you !!
FF35: How are you gentlemen !!
FF35: All your extensions are belong to 3.0.
FF35: You are on the way a vanilla Firefox.
Captain: What you say !!
FF35: You have no chance to survive wait for GA.
FF35: Ha ha ha ha....
Operator: Captain !! *
Captain: To put back every 'extension'!!
Captain: You know what you doing.
Captain: Install this.
Captain: For great justice.
Captain's Log - Additional: Here's the same extension for Thunderbird 3.0b2, though I had to disable the quicksearch toolbar box as it wouldn't close properly in TB3; also found some handy toolbar buttons here for filtering a mailbox for all/unread.


Workspace #fail

Another obscure and unhelpful error message that now pops up about once every five minutes while I'm working in Eclipse. Upgraded to Eclipse 3.5 a day before the official release (thanks to my Friends of Eclipse membership), but to no avail. Evidently my workspace is pooched somehow.

Why can't more error messages tell you *HOW TO SOLVE THE PROBLEM*, rather than just reporting that something went wrong? Surely as software devs we should be able to do better... if for no better reason than to avoid having to listen to end users like myself complain? :P


Eclipse 'Vote For Pedro' Plugin?

Got this today. No idea why / how. Anyone ever seen this?

WTP Cruise Control #fail #eclipse35

Nine days without a green build this close to GA? Seriouslywtfbbq, people!


P2: The Publisher

The publishment has begun [1].

Read all about this enhanced replacement for the Metadata Generator.



Eclipse Community Survey: 4 More Insights

Ian blogged 6 insights from this year's Eclipse Community Survey; here are a few more to get us to a full Top Ten list.

  1. What is your primary operating system?

    Linux is certainly a strong player in both development (26%) and deployment (40%), beating Mac (7% and 3%); Windows leads on the desktop (64%) but, by these numbers, actually trails Linux in deployment (38% vs. 40%). More interesting to me is the fragmentation of Linux, showing that Ubuntu beats RHEL/Fedora by 10% in the desktop space (development), but loses in the server space (deployment).

  2. Where do you typically go to find Eclipse-related information?

    About 2/3rds said Google and/or the Eclipse home page, which suggests that the homepage has certainly improved - but a lot of people would rather just search. However, the survey didn't mention our finely crafted, or Survey #FAIL.

  3. Are you or the organization you work for a member of the Eclipse Foundation?

    Five out of six respondents (83%) said No. So either we've done a terrible job of converting users into members, or people would rather give back in the form of testing, documentation, filing bugs, and writing articles. I suspect it's a little of both, but mostly the former.

    Kudos to the contributors, and shame on the corporate drones for not convincing their queen to send a little honey back to Eclipse.

  4. In the last year, how have you participated in the Eclipse community?

    While nearly a quarter of respondents (24%) said "I entered at least one bug into Bugzilla", more than 2/3rds said they "used Eclipse but didn't actively participate in the community." To me that's a clear sign we have more users than contributors. Is that because most Eclipse users are Windows folks who don't grok that Open Source works best when everyone sees themselves as part of the process, rather than just a consumer?

I've been reading More Joel On Software recently, thanks to winning a prize for bringing a bag purchased in Alaska to EclipseCon this past March. One article from there stands out in this context: Building Communities with Software, from March 2003. Here's an excerpt:

The social scientist Ray Oldenburg talks about how humans need a third place, besides work and home, to meet with friends, have a beer, discuss the events of the day, and enjoy some human interaction. Coffee shops, bars, hair salons, beer gardens, pool halls, clubs, and other hangouts are as vital as factories, schools and apartments ["The Great Good Place", 1989]. But capitalist society has been eroding those third places, and society is left impoverished.


So it's no surprise that so many programmers, desperate for a little human contact, flock to online communities - chat rooms, discussion forums, open source projects, and Ultima Online. In creating community software, we are, to some extent, trying to create a third place.

If you feel your third place is lacking, please consider contributing more to Eclipse, to Fedora or CentOS, to JBoss Tools, or whatever tickles your fancy. Just give something back. Your community will thank you, since, after all, "A rising tide lifts all boats."

UPDATE, 2009/05/30: Mike's right, calling our users "freeloaders" isn't fair. I just wish there was a more obvious way to convert users into contributors.


Dash Athena: Eclipse Common Build System / Running Tests On Your System

Bjorn recently kvetched that Eclipse projects met two or three of those goals, but fell down on the "common build system" and "tests run on your system" [1].

While it's true I've seen a number of projects who don't have, don't run, or don't publish their tests, I'm a little disappointed to see Bjorn's no longer committed to the common build solution we've been working on since September 2006 (in earnest since June 2008). We do have a project to solve both those concerns, but like all things at Eclipse, it's powered by YOU. You want it to happen, you have to help. I'm looking for a few good contributors and committers for the Dash Athena project to supplement the great people we already have. Or, if you don't have time to contribute code, you can help by using the system, testing it, opening bugs, enhancing documentation, and blogging about it.

So, what is Dash Athena?

Well, it's a common build system using Hudson and PDE which can also be run commandline on Linux, Windows or Mac, or in Eclipse. It can produce zips of plugins, features, examples, tests, then run those tests. It can also produce update sites with p2 metadata, which can then be published to (or, for that matter) so everyone can get your bits via Update.

Tests will currently only run on Linux - if you'd like to help us port to Mac OS X and Windows, please step up. The system works with CVS, SVN, and probably Git/Bzr/Hg too, since it supports building from locally checked-out sources and will copy your features/plugins so they're in the format that PDE requires. It supports source input via map files (soon Project Set Files (*.psf), too!) and binary inputs via zips and p2 repos / update sites.

If you aren't sure how to get started w/ an Athena build, please don't hesitate to ask. If you feel the docs are insufficient, incomplete, or inaccurate, let me know - or better - fix them! Want your own Hudson job to run your build? Just open a bug and we'll set you up.

Oh, and incidentally, the irony is not lost on me that I'm using American iconography above even though 5 of the 6 committers on the project are Canucks. :)


They're Coming To Make Me Write ASP!

watch video

Remember when you ran away
Big Blue got on their knees
And begged you not to leave
PDE'd go berserk

You left 'em anyhow
And then the days got worse and worse
And now I see you've gone
(Completely out of your mind)

And they're coming to take you away ha-haaa
They're coming to take you away ho ho hee hee ha haaa
To the Redmond farm
Where life is beautiful all the time
And you'll be happy to see those nice young men
In their - see? Sharp coats
And they're coming to take you away ha haaa

We thought it was a joke
And so we laughed
We laughed when you had said
That you could leave the FLOSS and work for Bill

Right? You know we laughed
You heard us laugh. We laughed
We laughed and laughed but still you left
But now we know you're utterly mad

And they're coming to take you away ha haaa
They're coming to take you away ho ho hee hee ha haaa
To the happy home with bugs and Vista and viruses
Security "fixes" which patch and patch and open new hacks and holes
And they're coming to take you away ha haaa

We've read your blogs
And used your code
And this is how you pay us back
For all our kind unselfish, loving deeds?
Ha! Well you just wait
They'll find you yet and when they do
They'll make you write with
You well-dressed geek

And they're coming to take you away ha haaa
They're coming to take you away ha haaa ho ho hee hee
To Camp Microserf where life is beautiful all the time
And you'll be happy to drink that nice Kool-Aid
In their clean white cups
And they're coming to take you away

Neuroticfish - They're Coming To Take Me Away


Use Your Metadata, Vol. 2 [Update]

Wednesday I went off on a bit of a G'n'R-fueled rant about metadata, documentation, and the shotgun blues. Today, I'd like to focus on something more positive.

As Pascal blogged the other day, the new p2 is almost done and is ready for tire-kicking. Some new features I personally like include:

  1. a new p2.director app / task, which includes support for installing multiple IUs (feature.groups) in the same step and finally has commandline help
  2. a new p2.repo2runnable ant task, used to convert an update site zip to the old-school unpacked "runnable" features/plugins format so that one day we will be able to throw away all those extra zips.

    UPDATE, 2009-06-02: repo2runnable now works as a commandline application too, thanks to Andrew's fix. Wiki updated.
  3. Composite Repo, Mirroring and Slicing Tasks - haven't tried these yet, but they look like they'll be very handy for one day replacing the hack that is for our Modeling Project composite repos with something more robust and easily maintainable.

I'm also impressed that there is new, current documentation regarding the above tasks, as well on the new Publisher which replaces the Metadata Generator.

Will this release be p2's salvation?


Well, I'm split on the new default behaviour in the update UI, such that when you add a new update site p2 won't by default search ALL your other listed sites. This is a great performance gain if you're installing a new self-contained feature, but a pain if you're installing something like VE which depends on EMF and GEF, and you don't already have those deps installed. Simple workaround is to just pick the "all sites" entry in the dropdown.

I'm also waiting to see if there will be something better done about recovery from slow/incomplete mirrors.

But other than these minor concerns, I'd say YES. With lots more commandline and ant toys available, p2 is certainly maturing. And with more people adopting its use and spinning p2 repo zips, more testing is being done, and more use cases are being covered.

So... get in the ring, and go a few rounds with p2. It's worth the battle. :)