Tuesday, December 30, 2008

Man Guilty Of Killing Moose Just Inside National Park Boundary

Newsminer.com in Alaska has a story about a hunter who was convicted of shooting a moose just inside the boundary of Denali National Park. The location of the park boundary, and how well it was marked on the ground, were important elements of the case.

I know from personal experience that United States National Forest boundaries, Wilderness boundaries, and National Park boundaries are not always well marked. I'm not making any determination of the hunter's guilt; I'm just pointing out my own experience with one issue in this case.

Goes to show that location matters.

Here is a link to the article:

http://newsminer.com/news/2008/dec/05/jeff-king-pay-fine-restitution-denali-park-illegal/

"Java Platform Performance" and OpenJUMP (Scalability)

I recently purchased the book “Java Platform Performance: Strategies and Tactics” and I have been reading it here and there. In Chapter 1 the author identifies four areas of a program’s performance:

  • Computational Performance
  • RAM Footprint
  • Start-Up Time
  • Scalability

We all know OpenJUMP has some issues with RAM Footprint, especially on older computers. The architecture of Eclipse, for example, was designed specifically with Start-Up Time performance in mind. However, one aspect of performance I had never really thought about was scalability. Here is a snippet from Chapter 1 where the author provides a nice description of scalability:

“Scalability, the study of how systems perform under heavy loads, is an often-neglected aspect of performance. A server might perform well with 50 concurrent users, but how does it perform with 1,000? Does the performance of the server drop off gradually, or does it degrade quickly when a certain threshold is reached?”

You might not think that scalability would be a big issue for OpenJUMP. After all, OpenJUMP is a desktop application designed for a single user, not a client-server application designed for hundreds or thousands of users. Still, scalability might creep into the performance equation for OpenJUMP in unexpected areas. One example might be the number of layers in a Task. Your plug-in might work great when the current task only has two or five layers, but how does it perform with 50 layers, or 500? If I remember correctly, we've had bugs in OpenJUMP that only show up when a large number of layers are present. I'm sure there are other examples of scalability issues in OpenJUMP.
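
To make that concern concrete, here is a tiny, purely hypothetical sketch (it doesn't use any real OpenJUMP classes) that times an operation comparing every layer in a task against every other layer. The quadratic amount of work is harmless with five layers and painful with 500:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical micro-benchmark: how does a per-layer operation scale as the
// number of layers grows? The list of names stands in for a real Task.
public class LayerScalingSketch {
    public static void main(String[] args) {
        for (int layerCount : new int[] {2, 5, 50, 500}) {
            List<String> layers = new ArrayList<String>();
            for (int i = 0; i < layerCount; i++) {
                layers.add("Layer " + i);
            }

            // A plug-in that compares every layer against every other layer
            // does O(n^2) work as the layer count n grows.
            long start = System.nanoTime();
            int comparisons = 0;
            for (String a : layers) {
                for (String b : layers) {
                    if (!a.equals(b)) {
                        comparisons++;
                    }
                }
            }
            double elapsedMs = (System.nanoTime() - start) / 1e6;
            System.out.printf("%d layers -> %d comparisons (%.2f ms)%n",
                    layerCount, comparisons, elapsedMs);
        }
    }
}
```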

I look forward to learning a lot more from the Java Platform Performance book and to applying this new knowledge to OpenJUMP.

The Sunburned Surveyor


Wednesday, December 24, 2008

GIS Icons to Complement Tango Icon Set

A couple of days ago I posted a link to the Tango Desktop Project, which aims to design an icon set for open source desktop applications:

http://tango.freedesktop.org/Tango_Desktop_Project

A post to one of the OSGeo mailing lists pointed me to a related project that provides GIS icons designed to complement the Tango Desktop Project. You can find the web page for this icon set here:

http://robert.szczepanek.pl/icons.php

The Sunburned Surveyor

Monday, December 22, 2008

Pirol Plug-Ins For OpenJUMP

Larry pointed me towards some of the Pirol plug-ins for OpenJUMP today. I don't think the Pirol project is heavily involved in OpenJUMP programming any more, but you can still get their plug-ins online. All of the plug-ins appear to be released under the GPL, and the source code is included with the JAR file downloads. You can get these plug-ins for OpenJUMP here:

http://www.al.fh-osnabrueck.de/jump-download.html

Most of the page is in German, but there are short English descriptions of each plug-in.

Some of these plug-ins look quite useful, and I may start to host the source code and executable form of these plug-ins at the SurveyOS Project.

The Sunburned Surveyor

Tango Desktop Project - Icons For Open Source Apps

I just stumbled upon this project after a post to the OSGeo mailing list:

http://tango.freedesktop.org/Tango_Desktop_Project

The project aims to develop a set of icons that will allow open source applications to have a consistent look & feel. The project includes a base icon library, and some icon design style guidelines.

I'd like to consider using the icons in OpenJUMP, and will be looking for ways to use them in other software that I develop.

The Sunburned Surveyor

Thursday, December 18, 2008

Google's Guide To Writing Testable Code

From Jon Aquino's blog comes this post about Google's guide to writing testable code. I skimmed over the first section "Flaw #1: Constructor Does Real Work" and it sounded logical to me.

I thought I would share it with others, in case you aren't reading Jon's blog (and you should be):

http://misko.hevery.com/code-reviewers-guide/

The Sunburned Surveyor

Distributing GIS Data: Best Practices

My friends in the great white north (Canada) are really way ahead of the game when it comes to GIS. They did fund the development of JUMP, after all, and they are at it again. They've released a guide to the dissemination (or distribution) of GIS data. This is another document I hope to read when I get some time.

I wish the federal government in the United States would release such a comprehensive policy document.

Here is the link to the PDF:

http://www.geoconnections.org/publications/Best_practices_guide/Guide_to_Best_Practices_Summer_2008_Final_EN.pdf

The Sunburned Surveyor

WGS84 Technical Report

Here is a link to a technical report from the National Imagery and Mapping Agency that talks all about WGS84 (the World Geodetic System of 1984):

http://earth-info.nga.mil/GandG/publications/tr8350.2/wgs84fin.pdf

I hope to read through this report on one of my upcoming vacations, since WGS84 is such an important part of what I do as a surveyor and GISer.

The report looks like it can be understood by most people, without a need for an advanced degree in mathematics or rocket science. :]

I hope others will find it useful.

The Sunburned Surveyor

Monday, October 20, 2008

A FeatureCache (Almost) For OpenJUMP

For most of last week and this past weekend I have been sick with the flu. Since I was stuck at home without a great deal to do I spent some time on a programming challenge related to OpenJUMP that I have wanted to tackle for a long time. (I made some attempts at tackling this challenge before, but they weren’t successful.)

The challenge has to do with the way OpenJUMP manages Feature objects. All of the features in a data source (like an ESRI Shapefile) are currently read from the data source and put into the computer's Random Access Memory (RAM). This has some advantages over other ways of accessing a data source, including faster operations on the features and fewer limitations when writing modified features back to the data source.

However, this approach also has some limitations. Every computer has a limit to how much information it can put into RAM, and most computer operating systems will only give a program like OpenJUMP a certain percentage of the available RAM. It is quite possible (especially on older computers) to run out of RAM when working with really large data sources in OpenJUMP.

One possible solution to this problem is to move your large data sources into a database like PostgreSQL or MySQL, and then use OpenJUMP to connect to and view this data. This approach comes with its own technical challenges, not the least of which is the requirement to install and operate a relational database.

Another solution that has always interested me is the idea of a Feature Cache. In this solution, Features read from a data source into OpenJUMP are not kept entirely in RAM, but are left on the hard disk. This would allow OpenJUMP (at least in theory) to work with some very, very large datasets, even on old computers. This isn't a perfect solution. Operations on features in a Feature Cache will be slower (I'm not sure how much slower) than they would be on features stored in a computer's RAM. This solution also imposes some limitations on the type of data that can be stored in a Feature Cache. A Feature Cache is much easier to make read-write (instead of read-only) if you put a practical limit on the size of textual (String) attribute values and feature geometries.

There is also a part of the OpenJUMP API that makes implementation of a FeatureCache somewhat tricky. The FeatureCollection interface, which a FeatureCache must implement to be very useful, defines a method named getFeatures which must return a list of objects that implement the Feature interface. This causes a problem because you need to return a collection of Feature objects from this method that are presumably all in RAM. This requirement sort of negates the whole point of a Feature Cache to begin with.

The Feature Cache solution that I worked on this past week gets around this tricky problem by using a class called a FeatureFacade. A FeatureFacade object is a proxy that forwards all of its method calls to the FeatureCache object that is its parent. This means that a FeatureFacade object doesn't need to keep all of its geometry and attribute values in RAM. The FeatureCache can read this data from disk and return it to the FeatureFacade when it is requested.
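
In rough terms, the proxy looks something like the sketch below. The interface and class names here (and their methods) are simplified stand-ins I'm using for illustration, not the actual classes in the SurveyOS SVN:

```java
import com.vividsolutions.jts.geom.Geometry;

// Stand-in for the parent cache: it knows how to read values from its
// binary files on disk for a given feature ID.
interface FeatureCacheSketch {
    Object getAttributeFromDisk(int featureId, int attributeIndex);
    Geometry getGeometryFromDisk(int featureId);
}

// The facade keeps only an ID and a reference to its parent cache in RAM.
// Every call for data is forwarded to the cache, which reads it from disk.
class FeatureFacadeSketch {
    private final int featureId;
    private final FeatureCacheSketch parentCache;

    FeatureFacadeSketch(int featureId, FeatureCacheSketch parentCache) {
        this.featureId = featureId;
        this.parentCache = parentCache;
    }

    public Object getAttribute(int attributeIndex) {
        return parentCache.getAttributeFromDisk(featureId, attributeIndex);
    }

    public Geometry getGeometry() {
        return parentCache.getGeometryFromDisk(featureId);
    }
}
```

The important point is that the facade holds almost nothing in RAM; the geometry and attribute values stay on disk until something asks for them.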

To pull this off, my FeatureCache implements methods that are very similar to those defined in the Feature interface. It also implements the FeatureCollection interface, which means it can be wrapped with a Layer object and displayed in OpenJUMP.

Internally the FeatureCache uses two binary data files to store its data. One is for the attribute values, and the other is for geometry values. My FeatureCache also uses indexes for each of the files and RandomAccessFile objects to make the read and write operations as fast as possible. It also keeps track of empty “slots” in both files to keep them from growing any larger than necessary.
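
The storage scheme works roughly along the lines of this simplified, single-file sketch. It assumes a fixed slot size per record; the real FeatureCache splits attributes and geometries into separate files and is more involved than this:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Fixed-size records in a binary file, with an in-memory index
// (feature ID -> file offset) and reuse of empty slots.
public class RecordFileSketch {
    private static final int SLOT_SIZE = 256; // assumed practical limit per record

    private final RandomAccessFile file;
    private final Map<Integer, Long> index = new HashMap<Integer, Long>();
    private final Deque<Long> emptySlots = new ArrayDeque<Long>();

    public RecordFileSketch(String path) throws IOException {
        this.file = new RandomAccessFile(path, "rw");
    }

    public void write(int featureId, byte[] record) throws IOException {
        if (record.length > SLOT_SIZE) {
            throw new IllegalArgumentException("Record exceeds the fixed slot size.");
        }
        // Reuse an empty slot if one is available; otherwise append to the file.
        long offset = emptySlots.isEmpty() ? file.length() : emptySlots.pop();
        byte[] slot = new byte[SLOT_SIZE]; // pad so every slot is the same size
        System.arraycopy(record, 0, slot, 0, record.length);
        file.seek(offset);
        file.write(slot);
        index.put(featureId, offset);
    }

    public byte[] read(int featureId) throws IOException {
        Long offset = index.get(featureId);
        if (offset == null) {
            return null;
        }
        byte[] slot = new byte[SLOT_SIZE];
        file.seek(offset);
        file.readFully(slot);
        return slot;
    }

    public void delete(int featureId) {
        Long offset = index.remove(featureId);
        if (offset != null) {
            emptySlots.push(offset); // the slot can be reused, so the file stops growing
        }
    }
}
```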

The only major shortcoming of the FeatureCache at this point is that it lacks a spatial index. This is a challenge I may tackle in the future. I also didn't include a buffer in the FeatureCache. Some sort of buffer, like a first-in, first-out queue, could potentially speed up FeatureCache operations. I left the buffer out of this implementation because of the complexity it adds.
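
If I do add a buffer later, something as simple as a bounded, insertion-ordered map might do for a first cut. This is only a sketch of the idea, not code that exists in the FeatureCache today:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A bounded first-in, first-out buffer: once it is full, the oldest entry
// is dropped each time a new one is added.
public class FifoBufferSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public FifoBufferSketch(int maxEntries) {
        super(16, 0.75f, false); // false = insertion order, which gives FIFO eviction
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

It could sit in front of the binary files so that repeated reads of the same feature don't always hit the disk.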

At any rate, the guts of the FeatureCache I describe here are in the SurveyOS SVN:

http://surveyos.svn.sourceforge.net/viewvc/surveyos/java/openjump_feature_cache/trunk/src/

The code is ugly, and I’m sure it is full of bugs because I haven’t tested anything yet. But I thought I would put it online in case others were interested. I’ve got a fair amount of work left to complete the FeatureCache implementation and plug it into OpenJUMP. Still, I’m excited about the concepts and seeing how it will work. If it is successful it could make a real difference for users of OpenJUMP on computers with modest RAM. (At least for those that work with big datasets.) I'd like to do some more work on the FeatureCache when I get finished with my next release of the Super Select Tool.

Some interesting challenges I ran into (so far) while working on the FeatureCache:

- Creating an object that implements the Iterator interface that will step through the Feature attributes and geometries stored in a FeatureCache. This Iterator had to look no different than one obtained from a FeatureCollection that stores all of its features in RAM.

- Storing a FeatureSchema object in a binary file, and restoring a FeatureSchema object from this binary file (see the sketch below).
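
For the second challenge, the basic idea is to write the attribute names and types to a DataOutputStream and read them back in the same order. Here is a stripped-down sketch that uses plain strings in place of the actual FeatureSchema and AttributeType classes:

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Schema persistence boiled down: attribute names and type names, stored
// as a count followed by name/type pairs. Converting to and from the real
// FeatureSchema and AttributeType objects is left out of this sketch.
public class SchemaIoSketch {

    public static void writeSchema(Map<String, String> attributes,
                                   DataOutputStream out) throws IOException {
        out.writeInt(attributes.size());
        for (Map.Entry<String, String> entry : attributes.entrySet()) {
            out.writeUTF(entry.getKey());   // attribute name
            out.writeUTF(entry.getValue()); // attribute type, e.g. "STRING" or "GEOMETRY"
        }
    }

    public static Map<String, String> readSchema(DataInputStream in)
            throws IOException {
        int count = in.readInt();
        Map<String, String> attributes = new LinkedHashMap<String, String>();
        for (int i = 0; i < count; i++) {
            String name = in.readUTF();
            String type = in.readUTF();
            attributes.put(name, type); // insertion order preserves attribute order
        }
        return attributes;
    }
}
```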

The Sunburned Surveyor

Preview of the org.geotools.gpx2 Code

I've made some more improvements to the GPX support code I'm working on as part of an experimental module for GeoTools. Here's how the code works:

A SimpleGpxReader parses a GPX file and provides BasicWaypoint objects. (BasicWaypoint objects implement the SimpleWaypoint interface.) The SimpleWaypointToFeatureConverter class is then used to create a BasicFeature object from each BasicWaypoint object. The resulting Feature objects can be stored in a FeatureCollection and wrapped in a Layer object for display in OpenJUMP.
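
In rough outline, the intended flow looks something like the sketch below. The method names and stub interfaces are placeholders added to keep the example self-contained; the actual API in the module may differ:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Outline of the GPX-to-feature pipeline. The nested interfaces mirror the
// roles described above but are stubs, not the real org.geotools.gpx2 types.
public class GpxPipelineSketch {

    public interface SimpleWaypoint {
        String getName();
        double getLatitude();
        double getLongitude();
    }

    public interface SimpleGpxReader {
        List<SimpleWaypoint> readWaypoints(File gpxFile) throws Exception;
    }

    public interface SimpleWaypointToFeatureConverter<F> {
        F convert(SimpleWaypoint waypoint);
    }

    // Read the waypoints from a GPX file and convert each one to a feature.
    // The resulting list can be placed in a FeatureCollection and wrapped
    // in a Layer for display.
    public static <F> List<F> waypointsToFeatures(
            File gpxFile,
            SimpleGpxReader reader,
            SimpleWaypointToFeatureConverter<F> converter) throws Exception {
        List<F> features = new ArrayList<F>();
        for (SimpleWaypoint waypoint : reader.readWaypoints(gpxFile)) {
            features.add(converter.convert(waypoint));
        }
        return features;
    }
}
```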

Most of the code I've described is complete. I need to do a little unit testing, and then I can make a release. I hope to get the code into the GeoTools SVN soon as well.

Future plans for this module include support for GPX tracks and routes, not just waypoints. I'd also like to support GPX file metadata and queries, and the ability to work with some of the temporal attributes of GPX entities.

The Sunburned Surveyor


Wednesday, June 25, 2008

Google Map Maker and Open Street Map

I ran across a post on the OpenGeoData blog that talked about Google Map Maker. This is a web application from Google that allows users to create vector data by digitizing satellite photography in certain parts of the world (not in the United States).

You can read the OpenGeoData blog post here:

http://www.opengeodata.org/?p=307

You can view, and try out, the Google Map Maker web application here:

http://www.google.com/mapmaker

The Sunburned Surveyor

Wednesday, April 02, 2008

Discovering deegree

I've been digging around just a little bit in the Javadoc for the deegree library API. What a hidden treasure chest it is proving to be! It makes me wish I had taken a closer look at it a long time ago.

What have I found so far?

At least two (2) really cool things. Let me share them with you:

[1] deegree has a comprehensive system for spatial reference system transformations. You can, for example, use deegree code to convert coordinates from WGS84 to NAD 83 California State Plane Coordinates. (See the crs package, the coordinatesystems package, and the transformations package.)

[2] deegree has the ability to read, manipulate, and write image world files. (See the Javadoc for the WorldFile class.)

I hope to combine these two parts of deegree into a little GUI tool that allows the user to transform image world files.
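
As a rough sketch of what such a tool would do, the code below reads the six world file values, runs the anchor point through a coordinate transformation, and writes out a new world file. The PointTransformer interface is a stand-in for whatever deegree transformation gets plugged in (it is not a real deegree type), and a finished tool would also need to adjust the pixel size and rotation terms for the new spatial reference system:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

// A world file holds six values: x pixel size, two rotation terms,
// y pixel size (normally negative), and the x/y coordinates of the
// center of the upper-left pixel.
public class WorldFileTransformSketch {

    /** Stand-in for a coordinate transformation supplied by deegree or another library. */
    public interface PointTransformer {
        double[] transform(double x, double y);
    }

    public static void transformWorldFile(Path input, Path output,
                                          PointTransformer transformer) throws IOException {
        List<String> lines = Files.readAllLines(input, StandardCharsets.US_ASCII);
        double xPixelSize = Double.parseDouble(lines.get(0).trim());
        double rotation1  = Double.parseDouble(lines.get(1).trim()); // usually zero
        double rotation2  = Double.parseDouble(lines.get(2).trim()); // usually zero
        double yPixelSize = Double.parseDouble(lines.get(3).trim());
        double originX    = Double.parseDouble(lines.get(4).trim());
        double originY    = Double.parseDouble(lines.get(5).trim());

        // Transform only the upper-left anchor point in this sketch.
        double[] newOrigin = transformer.transform(originX, originY);

        Files.write(output, Arrays.asList(
                Double.toString(xPixelSize),
                Double.toString(rotation1),
                Double.toString(rotation2),
                Double.toString(yPixelSize),
                Double.toString(newOrigin[0]),
                Double.toString(newOrigin[1])), StandardCharsets.US_ASCII);
    }
}
```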

I'm sure there is more lurking in deegree that I have yet to discover. My only complaint so far is that they deviated quite a bit from JTS and the JUMP Feature model. This means that OpenJUMP programmers wanting to take advantage of deegree will have some of the same compatibility issues they would have if they wanted to use the GeoTools libraries. But I don't think deegree has quite the level of complexity that GeoTools does at this point.

The Sunburned Surveyor

Monday, January 07, 2008

Geodata Licensing

Richard Fairhurst has an interesting post on his blog for Open Street Map that talks about some of their data licensing challenges. I think he identifies some key issues with using Creative Commons licenses for data, which highlights some of the challenges of data licensing in general.

http://www.opengeodata.org/?p=262

I recommend the blog post to anyone interested in geodata licensing.