

Showing posts from April, 2011

A new SSC package to convert numbers to text (-num2words-)

-num2words- has been posted to the SSC Archives.  It is a Stata module to convert numbers to text.  It can convert integers, fractional numbers, and ordinal numbers (e.g., 8 to 8th).  The idea for this program originated from a LaTeX report I was creating that had some code that wrote the text version of numbers into sentences, including writing the proper-case text for a number when it started a sentence.  So, the LaTeX file (written via -texdoc- from SSC) had some code like:

****texdoc example
sum x, meanonly
loc totalN "`=_N'"
loc pct1 "`=myvar[1]'"
loc totalN "`r(N)'"
if `totalN' > `lastN' loc change1 "increase"
****texdoc text written:
tex `totalN' respondents took the survey this month.
tex There was a `pct1' percent `change1' in respondents who reported using incentive payment dollars.
....and so on
****

where the macros are defined as: `totalN' - the total number of relevant re
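The basic word-conversion idea can be sketched with Stata's built-in extended macro functions (a minimal illustration only -- this is not the -num2words- syntax):

```stata
* map a small integer to its word form via the -word # of- extended macro function
local words "one two three four five six seven eight nine ten"
local n 8
local asword : word `n' of `words'
di "`asword'"    // displays: eight
* a naive ordinal suffix; -num2words- handles the general cases
di "`n'th"       // displays: 8th
```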

Visualization of my iPhone tracking data

Last week, the internets were flooded with panic about the iPhone storing location data in a SQLite DB.  The DB (called consolidated.db) contains longitude, latitude, altitude, accuracy, and timestamp information for nearby wifi hotspots and GPS locations (when maps apps are used).  You can take a look at the data your iPhone has stored on you by using the iPhonetracker app (for Mac OS X), or if you've got a Windows machine you can find the consolidated.db file and load it into database software.  I used the program to visualize the data on my recently purchased iPhone 4 (I had a 3G until March, so I've only got 1 month of data to visualize--which in a way makes it easier to see how much tracking is really going on, since I can easily recall where I've traveled in the past month).  Here is the graph that iPhonetracker shows:  The advantage of this tracker app is that it visualizes the movement over time.  However, it's difficult to see the locations o
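If you'd rather explore the raw points than rely on the tracker app, the location table could be exported to CSV and inspected in Stata (a sketch only -- the file name and column names here are assumptions for illustration, not the actual consolidated.db schema):

```stata
* hypothetical CSV exported from consolidated.db's location table
insheet using "consolidated_export.csv", comma clear
* assumed columns: latitude longitude altitude accuracy timestamp
summarize latitude longitude altitude
* rough map of the recorded points
twoway scatter latitude longitude, msize(tiny)
```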

-writeinput- available from SSC

-writeinput- was recently posted to the SSC Archive [1][2].  I've written a bit about it before.  My announcement of this program focuses on using it to create a self-contained dataset example or snippet that can help other Statalist posters understand your question or response, but I've found the same is true for transmitting do-file examples to coworkers and students.  It's very much in the spirit of the Statalist FAQ.  Many times a simple data example would prevent a lot of confusion, but creating one isn't always convenient.  Sometimes users can post relevant data examples using one of Stata's canned datasets or by building fake data through a series of -generate- commands.  Some Statalist posters will simply try to explain their data, which sometimes causes confusion and varying interpretations of the problem and the data structure.  Other users might copy and paste a snippet of the data, but wrapping can be a problem.  Plus, others have to add doub
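For comparison, here's the kind of self-contained snippet that -writeinput- automates, written by hand with -input- (the data are made up for illustration):

```stata
clear
input byte id str6 name float score
1 "alice" 3.2
2 "bob"   2.7
3 "carol" 4.1
end
list, clean
```

Anyone can paste this into a do-file and reproduce the example dataset exactly.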

More on label wrapping and -statplot-: Adding N's to your figures

While using -statplot- in the real world, we came across a situation where we needed to place the N's for sub-groups in certain value or variable labels.  For these figures, the N's change across sub-datasets and when the survey data is updated with each wave, so hand-writing something like "(N = 100)" into each value label, variable label, or graph title is repetitive.  These figures are heavy on the information side (they'd surely be an easy target for junkcharts for many reasons), but the real versions of these figures use fewer N's than the examples below, and they are made to mirror the output produced by Ian Watson's -tabout- (from SSC).  Here's a strategy to add some N's to graphs automatically & wrap these labels with N's.  This example follows from the examples presented in earlier posts about -statplot- here and here.

***********************************!Create Data Example
sysuse nlsw88, clear
**varlabels**
lab var grade "
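The core trick can be sketched with standard commands: count the sub-group, then splice r(N) into the label (a minimal illustration, not the full -statplot- example):

```stata
sysuse nlsw88, clear
* count union members; -count- leaves the result in r(N)
count if union == 1
* write the N into the variable label automatically
lab var union `"Union member (N = `r(N)')"'
```

Each time the data are updated, rerunning the do-file refreshes the N's without hand-editing any labels.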

-obsdiff- available from SSC

-obsdiff- is a Stata module to find differences in a variable across records/observations.  It's ideal for finding the differences between rows that are near-duplicates.  This is usually the result of data that have been merged or joined in a way that created duplicates.  The solution may be to remove the extra record or to reshape, moving the extra observation to a new column (as is the case with var10 below).  A quick example:

*******************watch for wrapping:
clear
inp var1 str9 var2 var3 var4 str9(var5 var6) var7 str9 var8 var9 str9(var10 var11)
1 "a" 1 2 "c" "s" 3 "d" 5 "AA" "z"
1 "a" 1 2 "c" "s" 3 "d" 5 "BB" "z"
1 "a" 1 2 "c" "s" 3 "d" 5 "CC" "z"
2 "a" 1 2 "c" "s" 3 "d" 5 "CC"
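For a single variable, the underlying check can be hand-rolled with -bysort- (a sketch of the idea, not -obsdiff- itself): within groups of otherwise-identical records, flag whether the variable is constant.

```stata
clear
input id str2 var10
1 "AA"
1 "BB"
1 "CC"
2 "CC"
end
* within each id, flag groups where var10 takes more than one value
bysort id (var10): gen byte var10_differs = var10[1] != var10[_N]
list, sepby(id)
```

-obsdiff- automates this kind of comparison across records so you don't have to write a check per variable.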

Data cleaning with Google Refine

There's a lot to be said about the data and text cleaning abilities of programs like R [1][2] and Stata [3][4][5].  But when it comes to cleaning up data with lots of spelling errors, different forms of the same string, abbreviations, acronyms, etc. -- or if you've got to task a student worker whose skill set barely includes M$ Excel -- then Google Refine (it used to be called Freebase Gridworks) is a great tool for cleaning data.  Here's the Google Code page, and below is a video on its data cleaning tools.  Google Refine can also transform data and access external data (like JSON data) from other websites, but I've found it most useful for data cleaning.