Pivot tables are a growing staple for analysis in Excel, yet they remain limited to the functionality Microsoft has chosen to include. Typical operations include filtering, the choice of rows and columns, and mathematical aggregations. In R this functionality is not always so straightforward, that is until you try sqldf. The sqldf package implements SQL (Structured Query Language) queries over traditional data.frames. A simple example being:
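A minimal sketch of a pivot-style aggregation, using R's built-in iris data (my own example, not from a specific tutorial):

```r
# Pivot-table style summary with sqldf: average sepal length per species.
# Note: sqldf replaces dots in data.frame column names with underscores,
# so iris's Sepal.Length becomes Sepal_Length inside the query.
library(sqldf)

sqldf("SELECT Species,
              AVG(Sepal_Length) AS mean_sepal_length,
              COUNT(*)          AS n
       FROM   iris
       GROUP  BY Species")
```

This mirrors an Excel pivot table with Species on rows and an average as the values field, but with the full flexibility of SQL filters and joins.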
Okay, so it has been a while since the last post and, as you can imagine, a lot has changed. Starting with the major updates: I have now completed my Master of Research qualification at UCL’s CASA and am on the verge of beginning a PhD with UCL’s SECReT (a perfect acronym with no known explanation). Unlike most PhDs, however, the SECReT PhD takes four years: an MRes in the first year followed by a three-year research project.
When we consider traditional cartograms, our first thoughts are often of maps of poverty or wealth. Furthermore, we expect to see a vector- or raster-based representation rather than a real-world image. This post explores two aspects of this novel visualisation technique.
Firstly, we can consider the creation of the cartogram. For simplicity’s sake, the dataset used is the simplified world outline provided by www.gadm.org/, and for our imagery we use one of the NASA Blue Marble images downloaded in GeoTIFF format. Using the Cartogram Geoprocessing Tool version 2, available at http://arcscripts.esri.com/details.asp?dbid=15638, we can apply the cartogram transformation to the global boundaries vector layer and, in parallel, to any additional spatial layer. The image below shows the resulting population cartogram. One issue with the cartogram tool is that the transformation skews the image away from its simple rectangular format, so the image must be slightly cropped for the benefit of the 3D conversion.
The 3D version of the globe is created using Blender, a free and open-source 3D modelling package with many features comparable to commercial packages such as 3ds Max. Once the cartogram texture is created, it can be mapped to a sphere mesh and, through the use of keyframes, animated to spin in a smooth and earth-like manner. Further details will be provided later in a tutorial. Further realism could be added through night lighting and clouds, giving the earth a more convincing appearance.
As the title suggests, I was only able to attend the second day of this year’s Geo-12 event. However, the talks I did see were of an excellent standard and I will hopefully communicate some of the key ideas here. The talks were titled:
1. 3D Laser Mapping – Mapping vs Survey Grade by Andrew Fuller
2. Ten things not to do with maps by Mary Spence
3. Everyone can be a hydrographic surveyor by Tim Thornton
The 3D Laser Mapping talk, though not my main focus, proved to be very informative and highlighted the development of terrestrial laser scanning. The first point was that survey-grade accuracy is typically <3 cm, whereas mapping grade is <100 cm. The talk also reiterated the importance of mission planning when conducting high-accuracy surveying. Because the positions of the positioning satellites vary throughout the day, it is important to conduct a survey when the greatest number of satellites are overhead. This is even more important when working in urban canyons, where sky visibility is severely limited.
Andrew Fuller talked in some detail about the use of Inertial Measurement Units (IMUs), which allow the scanner to track its position even when satellite coverage is inconsistent. To see the results of the surveys, have a look at the video below. It is quite easy to see how this could be transferred to the games industry to recreate real-world places.
The next talk struck a balance between humour and purpose. Mary Spence gave a rapid and well-illustrated tour of common cartographic mistakes and possible solutions. Her talk worked through the most common ailments, including colour, labelling and map furniture, highlighting correct versus incorrect use.
As was highlighted, many of the suggestions were common sense, yet commonly forgotten. I will give a few examples of Mary’s recommendations below.
1. Map Content
The final point of Mary’s talk was on the subject of map content. All too often a cartographer seeks to embed far more information in a map than is required, which typically leads to congestion and confusion. Mary highlighted the importance of planning and of considering the output type and size, purpose and user. She used the example of one customer who, having digitised more data than required, used the phrase “It seems a shame not to include it”.
Another common mistake among map makers is the use of text such as “Map of…”, “Legend showing…” and “This map contains…”. In the majority of cases these statements are redundant and undermine the professional impression of the map. If the designer has to tell the reader it is a map, surely it is not a very good map. Additionally, when a designer uses arty borders, north arrows (a topic of their own) and insets, the question must be asked: is the designer trying to hide something about the actual map’s quality?
3. Do Not Lie
This line was shown alongside the Kingston University 3D campus map, which the university paid a large amount of money to complete. The project included helicopter surveys, yet the final product is over-generalised and as such is a very poor abstraction of reality. This was one of the motivations for my past lecturer, Dr Ken Field, to get students to create a more appropriate map.
The final talk, Everyone Can Be a Hydrographic Surveyor by Tim Thornton, was possibly one of the most significant. Tim Thornton presented the work of TeamSurv, an open-source bathymetric mapping project. The project aims to collect large amounts of depth data from yachts and cruisers and to collate and map it. The project appears to be well engineered and takes into account many of the questions that may be asked regarding data quality. Firstly, all submitted data is collated into a database, and a certain density of recordings is required before a depth is placed on the map. Through regression analysis all depths are plotted, and those beyond a sensible range are excluded. Users who regularly submit poor data are blacklisted or asked to recalibrate their equipment. The next part of the project is the correction for tidal heights and sea temperature. The project accounts for the effect of sea temperature on the speed of sound in water, and uses almanac data verified against live tide sensors to make all results consistent.
The majority of data collection to date has taken place within the Solent; however, as the project grows they are now collecting data on a global scale. In addition, some of the project’s results in the Solent have been verified through wide-beam sonar bathymetric scans. These professional surveys suggest the TeamSurv data is accurate to ±10 cm. Coverage within the Solent is now good; however, there are a number of blank spots in shallow regions.
I think this project is very interesting and is certainly one to follow. As the project expands and more users contribute, it will be interesting to see the strategies designed to increase coverage and infill the sparser parts of the dataset.
The Map That Changed the World, written by Simon Winchester, is a fascinating account of the early 1800s, a period in which map making was in its element. The book follows the life of William Smith, the son of a blacksmith, who may be considered the founder of geology. For his entire life Smith fought to be recognised and, indeed, to stay in the black.
Smith spent the majority of his life in pursuit of the map and is known to have travelled extensively throughout his life. His greatest hindrance was a lack of finance, and as such he had to fit his work around his job as a land drainer and engineer.
The book tells of the sampling that Smith conducted throughout his life and how he secured funding through numerous publications and favours. It also tells of the darker side of the scientific community of the time, who seemed intent on plagiarising both his work and his ideas. This disrespect led to many of Smith’s setbacks during his life and later to a short stint in debtors’ prison. Smith did, however, eventually receive the credit he was due, and was awarded the first Wollaston Medal for his lifetime achievements.
Smith’s final map of the geology of England, as shown below, is a testament to the skills he gained during his life in the measurement and depiction of the geology of the country. Though a reasonable number of the final maps were produced, few now remain in circulation, though one may be seen in the offices of the Geological Society.
The moral of the story is as powerful as the map itself. Holding a lifetime passion for a subject, and being able to step back up to the mark when you and your work are mocked, will allow you to create and achieve great things. During his life William Smith had to fight for any recognition from the aristocrats of the time, yet he is now considered one of the greatest men in the history of geology.
Just My Type, written by Simon Garfield, was published in September 2011. When you first see this book, the sleeve alone is enough to make you pick it up. Targeted at the novice level, the book draws the reader into the world of type, working through its history and exploring its meaning and role.
A particular feature of the book is the regular assessment of some of the most commonly used, and misused, fonts; of particular interest are Gill Sans (p. 48) and Comic Sans. Though the book does not dissect fonts in the same way as some of the more traditional texts, it does lead the reader to contemplate their font of choice, and perhaps even to appreciate and understand why different fonts mean what they do.
For those who really want to see a bit more on the anatomy of a font, you could start with the graphic below.
From my perspective, typography plays a significant role in cartography and data visualisation for a number of reasons. Firstly, as a cartographer you are working with a range of font sizes, purposes and colours; with a knowledge of type, one is able to make these fonts work in harmony and present the information as clearly and usefully as possible. In addition, through an appropriate font style the cartographer can influence the perceived purpose of the map: an elegant gothic font may suggest an historical or artistic map, whereas a clean sans serif may suggest a purposeful information map.
The visualisation of flows has been approached in many different ways through the history of cartography and mapping, with early examples such as the British coal export map by Charles Minard. For this post I will be looking at the flows of London’s Barclays Cycle Hire scheme. The first visualisation, displayed below, uses a combination of a base map and individual markers for each bike, displayed through time.
This first visualisation allows the user to reference the location of individual bikes; however, an error in the processing code resulted in all bikes travelling to incorrect destinations. Having solved this problem, the next sketch removes the basemap to provide a less congested visualisation. The white points indicate bike journeys and the red flashes indicate the start of a journey.
This visualisation is an improvement on the first, but the mix of colours does not work as well as it could. This is addressed in the final visualisation below, where trip beginnings flash white. Though simple, the effect over the course of the video is to show the concentration of starts moving from the outer zones into the centre and, at about 5 pm, the reverse, with many trips initiating in the City of London region.
The final product works well and shows the movement of bikes through the day. Its problems, however, are the lack of a spatial reference to orient the user and of an indicator of the total number of bikes in use at any time.
Map of a Nation: A Biography of the Ordnance Survey by Rachel Hewitt is an excellent historical account of the Ordnance Survey. Hewitt fully documents the various people and actions which, in combination, created one of the greatest maps in the world. She writes in detail about the measurement of the first baseline at Hounslow Heath and then about the national triangulation and the challenges it entailed. Particularly impressive is the mounting of the Great Theodolite (built by Jesse Ramsden) above St Paul’s Cathedral.
The book further describes the challenges faced by the Ordnance Survey during the Napoleonic Wars and the continual distractions it faced: in particular, completing a full map of Ireland before finishing the original First Series, measuring the distance between the Greenwich and Paris observatories, and mapping Jerusalem.
Map of a Nation
The book concludes with the completion of the Ordnance Survey First Series on the 1st of January 1870, 123 years after it was started. During this time, the Ordnance Survey had been through five Directors and had moved from the Tower of London to nearer its present home in Southampton.
For me, this book provided a wonderful insight into the development of both the map and the country. The changing nature of demand can be fully appreciated and is well illustrated throughout the text. It would be interesting to see what General William Roy, Major-General William Mudge or Major-General Thomas Colby would say if they were to see the present incarnation of the Ordnance Survey.
Now in its second decade, Google SketchUp has certainly come of age. In a past post I looked at importing SketchUp models into ESRI’s ArcGIS; now, a look at working with Lumion 2. This demo looks at an indoor scene, aiming to introduce the viewer to an office environment whilst highlighting some of the work being conducted.
The first image is a view of the office as it is being created in SketchUp; the overall image quality is evidently good, though nowhere near realistic. Even so, the modelling process is so efficient that this was created in under an hour.
The next phase is the export from SketchUp and import into Lumion. This process is particularly simple and relies on the COLLADA file format. Export via the menu File > Export > 3D Model; in the save options it is then important to enable two-sided faces, export edges and triangulate all faces.
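For those who prefer to script the export, the same options can be set through SketchUp’s Ruby API. This is a sketch based on the documented Sketchup::Model#export options for COLLADA (the filename is my own, and option names are worth checking against your SketchUp version); it runs in SketchUp’s Ruby console rather than as a standalone script:

```ruby
# Export the active model to COLLADA with the save options described above.
options = {
  :triangulated_faces => true,   # triangulate all faces
  :doublesided_faces  => true,   # include two-sided faces
  :edges              => true,   # export edges
  :texture_maps       => true    # keep textures for Lumion
}
model = Sketchup.active_model
model.export("office.dae", options)
```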
Once in Lumion, the import process is fairly simple: in the objects menu, select import file and then choose the COLLADA file. An important consideration at this point is whether to create new textures within Lumion or use the default Google SketchUp ones.
The video above was created using the Lumion video engine and exported at 720p. The video maker uses a simple screen-grab based approach to placing waypoints and then intelligently renders a path between them. The video editor has significant functionality and allows people, vehicle and object animations to be included.
A few faults evident in the video were the flickering on the Mac and TV displays, a result of multiple textures occupying the same plane. This may be alleviated by slightly lifting the primary texture off the underlying surface.
Processing is an open-source application for generating graphics, animation and interaction, providing a code-based environment for development. The example in the video below visualises tweets around London and allows the user to navigate and interrogate the data.
As is evident in the video, the main aspect of this code is the keyboard-based interaction with the scene. Panning is available with the traditional A, W, S, D navigation keys, and zooming and tilting with the ‘z’ and ‘l’ keys.
The navigation is created through the use of Processing’s keyPressed() function. This code sits in its own .pde file within the main sketch to simplify the sketch structure. To create the pan and zoom feature, the camera() function is used.
By setting both eyeX and centerX (and likewise for Y) to the same variable, the camera always remains directly above the image. Consequently, by adding to or subtracting from these shared X and Y variables, the scene appears to move.
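A minimal Processing sketch of this camera approach (the variable names and key increments are my own, not the original code’s):

```processing
float camX, camY;      // shared eye/centre position: changing these pans
float camZ = 500;      // camera height above the scene: changing this zooms

void setup() {
  size(800, 600, P3D);
  camX = width / 2.0;
  camY = height / 2.0;
}

void draw() {
  background(0);
  // eyeX == centerX and eyeY == centerY keeps the view directly overhead
  camera(camX, camY, camZ, camX, camY, 0, 0, 1, 0);
  rect(200, 150, 400, 300);   // placeholder scene geometry
}

void keyPressed() {
  if (key == 'a') camX -= 10;   // pan left
  if (key == 'd') camX += 10;   // pan right
  if (key == 'w') camY -= 10;   // pan up
  if (key == 's') camY += 10;   // pan down
  if (key == 'z') camZ -= 10;   // zoom in
}
```

Because the eye and centre share X and Y, moving them together pans the view, while changing camZ alone zooms; the original sketch’s ‘l’ tilt would additionally require offsetting the centre point from the eye.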