One, they should open source their Android app, so people can improve it.
Two, when a mobile app checks in, it should also send the MAC address of the wifi access point it's connected to, the MACs and signal strengths of all the other wifi APs it sees, the ID of the cell tower the phone is connected to, and the IDs and signal strengths of all the other towers it sees.
Three, the app should be able to search for locations based on that data. Why wait for a GPS lock and/or a couple of round trips trying to turn that information into a latlon, and THEN search for locations near that latlon? Cut out the middleman data.
Four, the app should keep a local cache associating wifi APs and tower IDs with checkin locations. When my phone can see the local cafe's wifi access point blazing away, it should be able to instantly display the location I checked into the last time it saw that AP, without having to wait for a couple of network round trips and searches from the 4sq servers.
Five, the Android app should integrate with the Maps app just like Google Latitude does. When I bring up Maps, there should be an overlay named 4sq that has pinpoints for all my friends' checkins in the last couple of hours.
Six, they should start contributing their mapping information to OSM. Or at least give the user the option to click a couple of checkboxes "Do you want to contribute your information to OSM?"
Seven, they need to work with organizations and with serious crowdsource volunteers better. For example, make it easy for an airport authority, a university, a mall, a public transit org, etc to create the canonical entries for airport gates and terminals and shops, for buildings and classrooms, for mall locations and shops, for bus and transit stops.
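The local cache from item Four is simple enough to sketch. Here is a minimal illustration in Java; all the names are invented (this is not Foursquare's code), and a real Android app would key on the BSSIDs reported by WifiManager scan results:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of item Four: map the MAC (BSSID) of a visible wifi
// access point to the venue checked into the last time that AP was seen.
public class CheckinCache {
    private final Map<String, String> apToVenue = new HashMap<>();

    // Record that a checkin at `venue` happened while `bssid` was visible.
    public void remember(String bssid, String venue) {
        apToVenue.put(bssid, venue);
    }

    // Instant local lookup: no network round trips, no server-side search.
    // Returns null if this AP has never been seen at a checkin.
    public String guessVenue(String bssid) {
        return apToVenue.get(bssid);
    }

    public static void main(String[] args) {
        CheckinCache cache = new CheckinCache();
        cache.remember("00:1a:2b:3c:4d:5e", "Local Cafe");
        System.out.println(cache.guessVenue("00:1a:2b:3c:4d:5e"));
    }
}
```

The point of the sketch is the shape of the data, not the storage: on a phone this map would be persisted locally (e.g. in SQLite) so the guess survives app restarts.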
Too much Java example code opens with "import foo.bar.*" and "import foo.baz.*", and then just uses the symbols defined in foo.bar and foo.baz willy-nilly. This is *stupid*. The point of example code is to teach me. Doing this in example code does not teach me which symbols are in which package, nor does it teach good form or proper idiom. The only thing it teaches is rage against whoever wrote the examples. Almost all of the teaching docs and example code for Java are the worst chronic offenders.
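For contrast, here is what teaching-quality example code looks like: a tiny sketch with explicit imports, using ordinary java.util classes chosen only for illustration:

```java
// Explicit imports: the reader learns exactly which symbols live in
// which package, instead of guessing behind a wildcard.
import java.util.ArrayList;
import java.util.List;

public class ExplicitImports {
    public static List<String> makeList() {
        // The reader can see that List and ArrayList come from java.util.
        List<String> names = new ArrayList<>();
        names.add("alice");
        return names;
    }

    public static void main(String[] args) {
        System.out.println(makeList());  // prints [alice]
    }
}
```

The compiled program is identical either way; the only thing a wildcard import saves is a few keystrokes for the author, at the cost of the reader.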
Many projects practice "open source" via the "release the source" technique. Often what is "released" is underwhelming: a bare tarball that lacks build instructions, and metadata such as change history, internal documentation, and bug and feature commentary.
This is not a "best practice".
A slightly nicer way to do it is to maintain a read-only repository, such as a public SVN or GIT server, and on occasion write out an approved and sanitized version of the software for the great unwashed to pull down.
This looks nicer than just bare tarball drops, but actually isn't any better.
The next step is "public development". This turns software development into a public performance. It turns out that the proverb "Sunlight makes the best disinfectant" is true for software quality as well as for politics. Such projects keep their real operational version control system world readable, and keep their bug tracker and development mailing lists and documentation fully public. Sites like GitHub and Launchpad make it trivial to do this.
There are some costs a project has to pay to make this work. They have to make sure that "tip always passes", i.e. that they have a good enough test suite, continuous integration system, and merge processes. But, consider: any project, open source or closed, that doesn't have these things is unlikely to be generating high quality work at all.
The next step after "public development" is open development. Such projects accept participation and contribution from "outsiders". When fully expressed, there is no such thing as an "outsider"; everyone is a contributor.
The Drizzle project regularly gets people asking what they can do to get involved in the project.
One very easy way to brush up on your C++ skills and dip your toe into our open development process is to fix minor warnings.
We are very proud that Drizzle builds with zero warnings with "gcc -Wall -Wextra".
Go to one of those pages, pick a build log off the build history, find a warning that you think you can fix, and then ask us in the #drizzle channel on Freenode how to send your fix to us.
After you've done that a few times, you'll be ready to fix some low hanging fruit.
We've had people graduate from this process into becoming a Google Summer of Code student, and eventually having a full time paying job hacking on Drizzle and other open source software.
And it all starts with writing a simple warning fix.
(originally posted 2011-03-03)
Twenty years ago, if I wanted a reasonably fast data connection between a computer in Seattle and one in San Francisco, I had to call The Phone Company. Contracts would be negotiated and signed, Purchase Orders would be sent, Expensive Machines would be shipped, Work Orders would be generated and executed on, and well-trained well-paid Union Men would provision and test the link, which would be from a specific geographic point, to another specific geographic point. And I would be presented with a monthly Expensive Bill.
Ten years ago, that started to change, dramatically. All that complex hardware, cabling, installation, and cost-recovery got abstracted away by TCP/IP. Today, to get a much faster and much more flexible connection, I just click on a hyperlink, or start a VPN, and I have a connection that lasts a few seconds to a few hours, for just as long as I need it, and then the underlying real hardware forgets completely about me and my data, and gives some other random person the link they need.
Ten years ago, if I needed to run "back office" software for a company, or if I wanted to run a web site, I would again have to do the whole Contracts, Purchase Order, Expensive Machine, Work Order and so forth. And again, there would be a big monthly bill, plus a big capex spend too.
About 5 years ago, that started to change, dramatically. All that "stuff" got abstracted away. With the typing of a command, or the click of a UI button, machine instances spin up to do my work, and when I am done with them, the underlying real hardware forgets completely about me and my workload, and gives some other random person the machines they need.
Cloud computing is to computing, what the Internet is to telecommunications.
(originally posted 2011-03-13)
When you are surrounded by something, that something eventually becomes obvious, then it becomes assumed knowledge, and then finally it becomes invisible.
I have been involved in Open Source since the late 1980s, and it is sometimes hard for me to remember that what is obvious and assumed by someone in my position is not obvious to everyone.
Early this week, I received an email from a group who are literally half a world away from me, physically and culturally. They use Eucalyptus, and had a question about it. Specifically, they wanted to know if they could modify it, if they could add modules and features that they needed.
Like I said, what seems obvious to me, is not obvious to everyone.
I wrote back to them, and explained that Eucalyptus is "Free Libre Open-Source". Most of it is licensed under the GPL, the GNU General Public License. That means that they can download it for free, can try it for free, run it on as many machines, with as many users, for as long as they want, for free. And yes, they can also look at the source code, modify and patch it, add more code, and write new modules for it, without having to get permission from anyone.
CPAN and /usr/bin/cpan as installed by native package management apparently do not work out of the box on stock MacOS, on stock Solaris, on stock Illumos, on stock Ubuntu, on stock Debian, on stock Fedora, on stock RHEL, or on stock CentOS.
Perl is now worse than the various implementations of the JVM at assuming that an entire box is going to be turned into a "Perl machine", and that the admin of that machine has any interest in keeping track of all the ways that Perl is special, and of all the ways that Perl wants to make that machine "special".
I mostly gave up on Perl about 3 years ago, and about once a year I go back and give it another try, and each time the experience is worse.
- Take on as little student loan debt as possible. And if someone will not pay you to get a post-grad degree, don't waste the debt and the time. Keep out of debt. You never want to feel stuck somewhere just to make rent and pay bills.
- Learn to write. You learn to write by writing. Take writing classes, read about good writing, and practice writing. It doesn't matter what kind of job you get or life path you take, you need to know how to write.
- Get involved in some open source projects, and make real contributions to them. The Google Summer of Code is a good thing to get involved in. A portfolio of demonstrated contributions to open source projects is more impressive than a GPA on a new resume.
- Get involved. Find your local makerspaces, hackerspaces, and barcamps. Volunteer and participate. Go to Ignite. Speak at Ignite.
- Always be fluent in at least two programming languages, and practice learning new ones. Languages and frameworks come and go, learning new ones is forever.
- When getting a job, beware of the non-compete and copyright assignment clauses in the employment contract. Push back on them. If they are non-negotiable, too onerous, or actually enforceable, beware and be careful about taking that job. Keep your list of "personal and outside projects" ready to attach as an appendix.
For years I have been wishing for a "Netflix for Books", for physical books.
Here is how I envision it working:
A large municipal library, or a consortium of them working together, sets up a site and paid service very similar to Netflix, only for books.
I, as a user, select how many books at a time I want to rent. There would be different monthly payment levels, just like Netflix.
Books in my queue get checked out from the library or via interlibrary loan. They get mailed to me, along with a return mailer. Postage would be USPS Book Rate, of course, unless I am willing to pay extra for Priority or Express mail.
I read the book, keep it for as long as I want (maybe with a one year maximum), and then either return it with the return mailer or by dropping it off at the library like a regular book.
I could keep using my muni library "for free", or use this service for the convenience factor. It could even be a source of much needed funding for the amazing public library system that we all too often take for granted, and do not use enough.
I want this.
And the problems with autotools are getting worse, because it itself was never designed to have cleanly portable control files between versions. Back when most everyone just FTPed down a tarball, and then ran the prebuilt ./configure, it worked pretty well. Now that people pull a project's raw repo over SVN, BZR, or GIT, and then have to run libtoolize, aclocal, automake, autoconf, etc themselves, and who knows what version of autotools is locally installed, all hell breaks loose.
With respect to all the autotools replacements, such as cmake, Ant, etc, and all the other ones mentioned in Eric Raymond's recent blog post: they ALL are some combination of horribly slow, enforce their own special "one true way" of laying out source trees, are specific to what languages they will deign to handle, have abysmally bad error messages when there is a problem (and being worse than autotools in this respect is an amazing achievement), require the installation of a JVM and a huge pile of buggy poorly documented class files, require the installation of a huge pile of buggy poorly documented Python modules, require the installation of a huge pile of buggy poorly documented Perl modules, cannot intelligently detect and handle optional build dependencies, cannot cross compile, cannot build out of a read-only source tree, cannot build out of tree, and/or cannot build shared object files.
I wish to gods and monsters that this was not true, but it is. And until the writers of the competing build chain systems understand why all this stuff is important, and are willing to support it, autotools will stick around, and people will continue to use it.
This is not to say that it cannot be used better. One of Monty Taylor's herculean tasks on the Drizzle project has been pandora-build, which is a refactoring and rewrite of the years of cargo-cult accumulated cruft that has infested most autotools-based open source projects.
It's worth reading Shrew's original blog posts, and then trying it out.
I would love to see some work done on how well libmysql+mysqld, libdrizzle+mysqld, and libdrizzle+drizzled handle highly concurrent asynchronous event-oriented workloads such as those generated by all these new node.js applications.
I suspect that all sorts of surprising bugs will be discovered.
Please help us discover those bugs.
Jenkins is a pretty standard Java-based web app. The configuration settings are stored in XML files, and that configuration is manipulated using an "easy to use" Web GUI.
The "old skool" UNIX-like way to keep configuration settings is in a text file, which is edited with an ordinary text editor, and is read by the program daemon on start or SIGHUP. This is considered "scary", "hard to learn", and "hard to use" by novices.
There is a big problem with GUI-only managed configuration, an issue where text file configuration has a major advantage.
I did not set up the Jenkins server or nodes. I am not the only person with admin access to it. Several other people have set it up, set up various projects in it, and added new nodes and new types of nodes.
As I work on it and look at the existing configuration, I often find things that are "surprising", things that make me say "Is that right? That can't be right? Can it?". And then I have to spend time digging into it. Sometimes it IS right, for reasons I didn't know at that moment. Sometimes it used to be right, but isn't any more. And sometimes, it just wasn't right.
In a textual configuration file, you can put comments. The purpose of a comment is to communicate into the future, to tell those who came after you (including your future self) what you were intending to do, and why you selected some "surprising" option or way of doing things.
There is no good way to put comments into GUI or WebGUI configuration, even if it has a freeform field labelled "comments".
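For example, a commented directive in a hypothetical daemon's text configuration (all directive names here are invented) carries that intent forward in a way no WebGUI form does:

```
# node17 only has 2GB of RAM, so we cap the heap here.
# This looked "surprising" to me too -- remove it once node17 is upgraded.
max_heap_mb = 1024

listen_port = 8080
```

The next admin who trips over max_heap_mb gets the "why" right next to the "what", in the same file, under the same version control.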
Update, this post is being discussed at Reddit.
Update, this post is being discussed at Forrst Podcast ep 122.
This is actually one of the first and greatest innovations Linus Torvalds achieved with Linux. While the GNU folks developed the idea of free software, they limited themselves by sticking with just a tight group of core developers, who largely would ignore suggestions and patches from outside. It was Linus' belief that everyone had something valuable to contribute that led to the style of open-source development that now dominates the world.
It is maybe an excess of hope to say it "dominates the world".
There is still entirely too much "fauxpen source" software, where corporate dev teams or small mentally incestuous groups emit "releases", while sitting behind legal and social barriers to outside contribution and feedback. The tightly coupled development process results in tightly coupled and poorly documented software, which is hard for "outsiders" to contribute to, and the resulting feedback loop spins around until the software is so knotted up that useful development basically stops. (Of course, closed source software has the same problem, only worse.)
The open development open community model does avoid this problem, and so its use is spreading.
Twitter has been just as "integrated" with Android for quite some time. And all Twitter Inc had to do was write an app, that used the standard Android API hooks.
In fact, my own Android phone doesn't even use the "official" Twitter app (which I find to be slow, heavy, and poorly done); it uses an even better and more advanced one called twicca. And the author of twicca didn't need to "partner" with either Twitter Inc or Google Inc. He just wrote a better app that, again, just used the public APIs.
It doesn't take a high-level corporate partnership, a 12 month product roadmap, and a heavyweight development cycle by employees of a set of large companies to add a deep and useful integrated feature to Android. It takes a single developer working in a cafe.
Now that the Motorola Xoom is out, and after playing with the first cut of the Android 3.0 Honeycomb running on it, I knew that it was time. And since I didn't enroll for Google I/O before it sold out, I was probably going to have to buy it.
So I did. I ordered it from Amazon, requested free 3day Prime shipping, and was expecting it on Wednesday. It arrived today.
The unboxing went pretty quickly. There is the unit, a USB cable, and a power cable. And a few dozen sheets of regulatory instructions. No "user manual". Fortunately, while the Honeycomb UI is different from the past Android UIs (stock, GoogleTV, Sense, Touch), it's not too much different, and I was driving it after only a few minutes, and after all my apps synced up, I was using the calendar, gmail, chat, and Kindle, and such.
I think this is going to be my default Kindle device, and I am going to be doing a majority of my email reading and much casual web browsing with it.
I have only two complaints so far:
One is the standard annoyance of having to connect all the apps that depend on back-end services back up to their accounts (Kindle & Amazon MP3 to Amazon, Facebook, Foursquare, Twitter, etc). Android really needs to have a standard keychain service, and then sync it back to the G mothership like it does everything else.
The other, continuing annoyance, is the power/charging cable uses a proprietary connector. Now that the world is quickly converging on micro-USB for charging phones, after 20 years of power adapter hell, it's looking like the hardware manufacturers are doing the same stupid thing again with tablets. This is going to be Yet Another power connector I will have to pack into my gear.
Other companies copied this idea, since it saves money. However, they would "forget" to call back, or would drop you to the back of the queue after all the people who were holding, or they would call back at lunch time or right before the end of office hours.
That most large corporations cannot be trusted with something so simple as returning phone calls, even when it's in their own direct financial interest, is a demonstration of exactly what is wrong with corporations. Or maybe of structured organizations of humans in general.
They all suck, and emit crap gigantic SVG files.
Here is the way to do it that makes perfectly good files that don't suck:
# -sDEVICE=pngalpha renders PNG with a transparent background;
# png16m works for opaque output (the device choice here is my assumption)
gs -dBATCH -dSAFER -dNOPAUSE \
   -sDEVICE=pngalpha -r72 \
   -sOutputFile=Logo.png \
   Logo.pdf
If you have an Adobe Illustrator AI file instead of a PDF file, just put a PDF filename extension on the end, it will work fine.
mv Logo.ai Logo.ai.pdf
gs -dBATCH -dSAFER -dNOPAUSE \
   -sDEVICE=pngalpha -r72 \
   -sOutputFile=Logo.png \
   Logo.ai.pdf
pngcrush -v \
   -rem text \
   -l 9 \
   Logo.png Logo.crush.png
mv Logo.crush.png Logo.png
If you want to shrink the image to half size, change the -r72 to -r36, and so forth.