Saturday, May 7, 2011

Unabbreviate Macro Lists in Stata

This Statalist thread from a few months ago, started by Nick Mosely, asked about working with hundreds of macros and eventually got onto the topic of expanding or unabbreviating macro lists (see -help unab- for the varlist version of this idea).  Based on my posts in that thread, I recently posted -mac_unab- to the SSC Archives to help with this problem.
-mac_unab- is still a bit of a kludge, but I haven't figured out a better approach (nor did anyone in the thread suggest one).  The biggest issues with -mac_unab-, which I hope to find better solutions for, are:
1.  When you run -mac_unab-, it prints the entire output of the -macro list- command in the Results window.  This might be desirable for some, but I'd like to be able to toggle it on/off.  Currently, I gather the macros via a log, so there's no way to avoid printing the -mac list- output each time -mac_unab- is run (a sketch of this log-based approach follows this list).
2.  Currently, the program will only match macros with the pattern stub*, so you specify what the macros begin with and an asterisk to indicate that you want to match everything with any letters following that prefix.  I'd like to expand those capabilities to match macros based on more complex matching rules like those in -help varlist-, such as *mymacro*, my?macro, my~macro, etc.  Regardless, the names of your macros will need to be systematic to take advantage of -mac_unab-, but I'd like to relax the formatting requirements necessary to match macro names.
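For the curious, the log-based approach behind issue 1 looks roughly like this (a sketch of the kludge, not -mac_unab-'s actual code):
****
tempfile maclog
qui log using "`maclog'", text name(maclist) replace
macro list   //this prints to the Results window -- the output I'd like to be able to suppress
qui log close maclist
*...then read the log file back in and keep the lines matching the stub* pattern...
****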
The syntax closely follows that of -unab- for unabbreviating varlists.
Here's an example (a minimal sketch: I'm assuming the -unab--style colon syntax, and the macro names are just illustrations; see -help mac_unab- for the exact syntax):
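    loc price2009 "10.5"
    loc price2010 "12.1"
    loc price2011 "13.8"
    *assumed usage, mirroring -unab-: new macro name, a colon, then the stub* pattern:
    mac_unab mylist : price*
    di "`mylist'"   //should display the unabbreviated list: price2009 price2010 price2011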




Sunday, May 1, 2011

For Computers, understanding natural language is sometimes hard ...

This paper by Chloé Kiddon and Yuriy Brun at U of Washington describes a Bayes classifier that can be used to find accidental double entendres or "potential innuendos" (called "That's what she said" or TWSS jokes) in sentences.  Here's the Ruby script for running this classifier to identify so-called "low brow comedy" (their words, not mine) in natural, human language.
Hopefully, this foreshadows the great things we can expect from our computers' auto-complete functionality in the near future. This article from Wired on detecting humor with computer software is also relevant. Andrew Gelman, a Bayesian scholar and co-author of the great zombie survey paper^, linked to this article on his blog after I recently mentioned it to him.


___
^ This paper contains a Technical Note describing the authors' rationale for using LaTeX, which is one of my all-time favorite quotes: 
"We originally wrote this article in Word, but then we converted it to Latex to make it look more like science."

Saturday, April 30, 2011

A new SSC package to convert numbers to text (-num2words-)

-num2words- has been posted to the SSC Archives.  It is a Stata module to convert numbers to text.  It can convert integers, fractional numbers, and ordinal numbers (e.g., 8 to 8th).  The idea for this program originated from a LaTeX report I was creating that had some code that wrote the text version of numbers into sentences, including writing the proper case text for a number if it started a sentence.  So, the LaTeX file (written via -texdoc- from SSC) had some code like:
****texdoc example
sum myvar, meanonly
loc totalN "`=_N'"       //total number of respondents
loc pct1  "`=myvar[1]'"  //calculated percent from the dataset
loc lastN 100            //the previous wave's total (a placeholder; store this however you like)
if `totalN'>`lastN' loc change1 "increase"
****texdoc text written:
tex  `totalN' respondents took the survey this month.
tex  There was a `pct1' percent `change1' in respondents who reported using incentive payment dollars....and so on
****


where the macros are defined as:
`totalN' - the total number of relevant respondents (so, loc totalN "`=_N'")
`pct1' - the calculated percent respondents from the dataset (so, loc pct1  "`=myvar[1]'")
`change1' - substitutes the word "increase" if `pct1' increased from the last survey wave, "decrease" for a decrease, and "equal to" if it was the same.
I created -num2words- to ease the process of converting many variables--like those underlying the macros `totalN' and `pct1'--to words/text.  It can change "212 respondents" to "Two-hundred and twelve respondents" in the narrative, or "25.5 percent" to "twenty-five and 5 tenths percent" (which can be truncated to just "twenty-five percent" with the "round" option in -num2words-).
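For instance, here's a quick sketch of that conversion (the variable and macro names are just illustrations):
******************
clear
set obs 1
g pct1 = 25.5
num2words pct1, g(pct1_full)           //"twenty-five and 5 tenths"
num2words pct1, g(pct1_trunc) round    //truncated to "twenty-five"
replace pct1_full = proper(pct1_full)  //proper-case the text, e.g., if it starts a sentence
loc pct1 "`=pct1_full[1]'"
di "`pct1' percent"
******************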
You can also automatically change numbers to words for insertion in table or figure titles, notes, etc.
******************fig 1 example
clear
set obs 10
g x = round(runiform()*100, .05)
g x2 = int(runiform()*100)
num2words x, g(x_rounded) round
num2words x2, g(x2_ordinal) ordinal
**graph**
egen mx = mean(x)
num2words mx, g(mx_rounded) round
gr bar x  , over(x2_ordinal, sort(1)) ///
note({bf: X for Obs 2 is `=x_rounded[2]'}) ///
text(60 20 `"Mean = `=mx_rounded[1]'"',  box )
**********************fig 1 example
Fig. 1

You can get -num2words- from the SSC Archives [1][2]

Visualization of my iPhone tracking data

Last week, the internets were flooded with panic about the iPhone storing location data in a sqlite DB.  The DB (called consolidated.db) contains longitude, latitude, altitude, accuracy, and timestamp information for nearby wifi hotspots and GPS locations (when maps apps are used).  You can take a look at the data your iPhone stores on you by using the iPhonetracker app (for Mac OSX), or if you've got a Windows machine you can find the consolidated.db file and load it into database software.
I used the iPhonetracker.app program to visualize the data on my recently purchased iPhone 4 (I had a 3G until March, so I've only got 1 month of data to visualize--which in a way makes it easier to see how much tracking is really going on since I can easily recall where I've traveled in the past month).  Here is the graph that iPhonetracker shows:

The advantage of this tracker app is that it visualizes the movement over time.  However, it's difficult to pick out specific locations on the map, and since it uses a heat map to display how frequently each location appears in my DB, some of the smaller dots on the map are hard to see.  As you can see, I spent most of my time since March in College Station and Waco, TX.  The dark purple spot in CS is near my building on campus.  
There are a few reasons that, after looking at this data, I am not too concerned about the tracking capability.  
First, the timestamps are off from my actual locations by a few hours, and even by a full day in several cases -- such as the days I traveled to Waco. 
Also, notice that the tracking did not pick up any locations during the drive to Waco (I guess that's because I didn't use it for GPS directions, and there probably aren't many wifi locations between College Station and Waco).  
Finally, the tracking is not very precise. There are lots of small dots on the map showing locations I've never been to, including the dots far outside of Waco near Gatesville and Hico.  Also, when I zoom in on the map and examine the records in the DB, it includes lots of places in the College Station area that I haven't traveled to in the last month. My only guess is that these blips are due to wifi hotspots or cell towers that my phone connected to but that either 1) have a really long range or 2) somehow have inaccurate longitude/latitude entries.
If you don't want to use iPhonetracker.app or you don't have a Mac, you can extract the data yourself by first finding the "consolidated.db" file in your iPhone backups maintained by iTunes (assuming they are unencrypted or you have unencrypted them). An easy way to find this file is to run this python script with the command:
"~//iphonels.py" | grep "consolidated"
this will tell you the name of the file containing your tracking DB.  Open this file in your favorite sqlite software.  In Mac OSX you can open it from the terminal with:
>sqlite3 "yourfilenamehere"
The CellLocation table that holds the tracking data has the following schema:
CREATE TABLE CellLocation (MCC INTEGER, MNC INTEGER, LAC INTEGER, CI INTEGER, Timestamp FLOAT, Latitude FLOAT, Longitude FLOAT, HorizontalAccuracy FLOAT, Altitude FLOAT, VerticalAccuracy FLOAT, Speed FLOAT, Course FLOAT, Confidence INTEGER, PRIMARY KEY (MCC, MNC, LAC, CI));
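From there, one way to dump the table to csv is with sqlite3's built-in dot commands (a sketch; the output filename is just an illustration):
>.headers on
>.mode csv
>.output celllocation.csv
>SELECT * FROM CellLocation;
>.quit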
Export the table to csv or whatever format you'd like.  Next, I uploaded this data to google maps and created the map below:


This google map shows the erroneous entries more clearly.  Thinking that this might have to do with the "horizontal accuracy" column in the DB, I examined the raw data for these erroneous locations, but their "horizontal accuracy" values were about the same as all the other entries, so it wasn't an accuracy problem.  Regardless, I'm not too worried about Apple or someone else using this data for no good.

Update
Crowdflow.net is looking for donated iPhone tracking data so that they can make visualizations like this one.  They'd better hurry and gather the data they need before the next iOS update that makes this tracking file more difficult to locate/access.

Tuesday, April 26, 2011

-writeinput- available from SSC

-writeinput- was recently posted to the SSC Archive[1][2].  I've written a bit about it before.  My announcement of this program focuses on using it to create a self-contained dataset example or snippet that can help other Statalist posters understand your question or response, but I've found the same is true for transmitting do-file examples to coworkers and students. It's very much in the spirit of the Statalist FAQ.
Many times a simple data example would prevent a lot of confusion, but creating one isn't always convenient.  Sometimes users can post relevant data examples using one of Stata's canned datasets or by building fake data through a series of -generate- commands.  Some Statalist posters will simply try to explain their data, which sometimes causes confusion and varying interpretations of the problem and the data structure. Other users might copy and paste a snippet of the data, but wrapping can be a problem, and others then have to add double quotes around string values with embedded spaces just to get the data into Stata to respond to the thread.
If you really want to get your point across, writing an -input- example is a good option.  However, this can be time consuming: you'll want to put quotes around string variable values and declare the storage types for the variables you define.  -writeinput- helps automate this process by letting you create an -input- statement for the entire dataset or a selected subset of your data.
Finally, since the do-file editor in Stata 11 was improved to accommodate very large files, you can now save example datasets (or entire datasets -- though I wouldn't suggest it) along with do-file code.  -writeinput- makes all of this easier.
Here's an example:
clear
//install writeinput from SSC//
cap which writeinput
if _rc ssc install writeinput, replace
//example//
sysuse auto, clear
writeinput make mpg price for in 1/5 using "test1.do", repl
writeinput make mpg price for if for==0 in 20/60 using "test2.do", ///
    replace n(Here's some notes)
writeinput make if for==1 & pri>200 in 1/50 using "test3.do", ///
    replace n(write some notes here)
type "test3.do"

More on label wrapping and -statplot-: Adding N's to your figures

While using -statplot- in the real world, we came across a situation where we needed to place the N's for sub-groups in certain value or variable labels.
For these figures, the N's change across sub-datasets and with each new survey wave, so hand-writing something like "(N = 100)" into each value label, variable label, or graph title is repetitive.  These figures are heavy on the information side (they'd surely be an easy target for junkcharts for many reasons), but the real versions of these figures use fewer N's than the examples below, and they are made to mirror the output produced by Ian Watson's -tabout- (from SSC).  
Here's a strategy to add some N's to graphs automatically & wrap these labels with N's.  This example follows from the examples presented in earlier posts about -statplot- here and here.
***********************************!Create Data Example
sysuse nlsw88, clear
**varlabels**
lab var grade "Really Long Variable Label for the Variable GRADE that will cutoff at 80 chars"
lab var tenure "Another Really Long Var Label for the Variable TENURE that will cutoff at 80 chars"
lab var wage "Long Variable Label, this time for the Variable WAGE that will cutoff at 80 chars"
 d grade tenure wage
 replace wage = . if race==3
***********************************!Create Data Example

Figure 14 shows how to add automatic wrapping and Ns to variable labels (watch for wrapping -- download entire do-file with link at the bottom of this page):
***********************!beginFig14 
loc vars grade tenure wage
loc ll 25  //sets the max length of each wrapped label line
loc j = 1
foreach u of local vars {
    **calc N**
    qui count if !mi(`u')
    loc nn `r(N)'
    loc len = length("`:var l `u''")
    if `len' > `ll' {
        **break the long label into `ll'-char pieces at word boundaries**
        loc pieces "`=ceil(`len'/`ll')'"  //ceil(), not round(), so the last partial piece isn't dropped
        forval p = 1/`pieces' {
            loc p`p' : piece `p' `ll' of "`:var l `u''", nobreak
            loc relabeling`j' `" `relabeling`j''  `"`p`p'' "'   "'
        }
    }
    else loc relabeling`j' `" `"`:var l `u''"' "'  //pass short labels through unchanged
    loc relabeling `" `relabeling'  `j'`"`relabeling`j'' "(N=`nn')" "' "'
    loc ++j
}
di "`relabeling'"
loc totalN = _N
****
statplot `vars',  ///
    name(g2, replace)  over(race) graphregion(margin(vlarge)) ///
    varopts( relabel( `relabeling' )) ///
    title("Wrapping Long Variable Labels - Automatically") ///
     note(Total N = `totalN')
graph export "fig14.png", as(png) replace
***********************!endFig14



Monday, April 25, 2011

-obsdiff- available from SSC

-obsdiff- is a Stata module to find differences in a variable across records/observations.  It's ideal for finding the differences between rows that are near-duplicates.  This is usually the result of data that have been merged or joined in a way that created duplicates.  The solution may be to remove the extra record or to reshape the extra observation into a new column (as is the case with var10 below).

A quick example:

*******************watch for wrapping:
clear
inp    var1 str9 var2 var3 var4 str9(var5 var6) var7 str9 var8 var9 str9(var10 var11)
1 "a" 1 2 "c" "s" 3 "d" 5 "AA" "z"
1 "a" 1 2 "c" "s" 3 "d" 5 "BB" "z"
1 "a" 1 2 "c" "s" 3 "d" 5 "CC" "z"
2 "a" 1 2 "c" "s" 3 "d" 5 "CC" "z"
end
obsdiff var1 var2 , r(1/2)
obsdiff , all
obsdiff, r(1/4)
**var10 is different across records
*-- we'll reshape to stack it wide across columns
bys var1: g j = _n
reshape wide var10, i(var1) j(j)

*******************
The output is just the -list- output for the values and rows that differ within each variable.  Since I haven't figured out a way to put this all into one nice table yet, the output can get a bit unwieldy when you're examining many rows and many variables.  One solution is to use the "using" option to send the log to an external file for examination.

Sunday, April 17, 2011

Data cleaning with Google Refine

There's a lot to be said about the data and text cleaning abilities of programs like R [1] [2] and Stata [3] [4] [5].  But when it comes to cleaning up data with lots of spelling errors, different forms of the same string, abbreviations, acronyms, etc. -- or if you've got to task a student worker whose skill set barely includes M$ Excel -- then Google Refine (formerly called Freebase Gridworks) is a great tool for cleaning data.
Here's the Google Code page, and below is a video on its data cleaning tools.  Google Refine can also transform data and access external data (like JSON data) from other websites, but I've found it most useful for data cleaning.  

Monday, April 11, 2011

LaTeX Short Course Material Available

For those interested in LaTeX, I've posted the presentation, handouts, and related materials from a recent short course (taught by Emily Naiser and me) on getting started with LaTeX for both Windows and Mac OSX.  We're teaching it again this summer.

Monday, March 28, 2011

Some -statplot- examples, Part 2 (wrapping long labels)

...continued from Part 1...
Part 1 of this post covered some advanced examples of -statplot-, focusing on the use of combinations of over() and by() options.
In Part 2, I examine some strategies for using -statplot- with really long variable and/or value labels.  Recently, while using -statplot- to create some figures for a paper where some of the labels needed to be the (longish) question and answer choice text, I discovered how much of a pain long labels can be for graphs.  This is a problem for any graph in Stata, regardless of whether your labels are in the legend or at the axis; however, my preference is that long labels (up to a limit) look better at the axis.
So, the examples below show how to use -statplot- options to create wrapped labels.  I hope to make this an option in -statplot- at some point in the future, but for now, the code below is a good template for automating label wrapping.  This can be extended to other plotting packages/commands.
Continuing from the last post, we're using the in-built "nlsw88" dataset.  Let's look at plots with long variable labels first, and then we'll look at long value labels (which are a bit more complicated).
Note: Please make sure you update your -statplot- to the latest version, since an earlier version of the program breaks when there are double quotes in suboptions, as in the examples below.


1. Wrapping Long Variable Labels
Figure 9 (below) shows what happens when we have really long variable labels for grade, tenure, and wage.
*********************************begin
sysuse nlsw88, clear
**varlabels**
lab var grade "Really Long Variable Label for the Variable GRADE that will cutoff at 80 chars"
lab var tenure "Another Really Long Var Label for the Variable TENURE that will cutoff at 80 chars"
lab var wage "Long Variable Label, this time for the Variable WAGE that will cutoff at 80 chars"
d grade tenure wage
****************!beginFig9
statplot grade tenure wage,  ///
    tit("Long Variable Labels", size(small))
****************!endFig9
*********************************end





Sunday, March 20, 2011

Some -statplot- examples

-statplot- (co-authored by Nick Cox and myself) was released earlier this month.  You can get it at the SSC [1] [2].
In this posting, I show some more advanced examples of -statplot- using the Stata nlsw88 dataset (-sysuse nlsw88.dta-).  [Note: Click on any of the graphs below to see a larger version in a new tab/window.]
First, a basic example of -statplot- might look like:
***********************!begin
sysuse nlsw88.dta, clear 
statplot grade tenure wage, blabel(bar) subtit({it:-statplot-} example)
graph export "fig1.png", as(png) replace
***********************!end

Fig. 1
The main advantage of -statplot- is creating plots of summary stats with the labels moved from the legend (the usual placement when using -gr bar|hbar|dot-) to the axis.  So, I could create a graph of the same data above with something like:
graph hbar (mean) grade tenure wage
however, it would look like the graph on the left in Fig. 2 below, where we still have a legend and an array of colors indicating each bar.  I often need to produce these types of graphs but with the labels on the axis (instead of the legend).  To get this type of graph using -graph bar|hbar|dot-, I might run something like:
******!
collapse (mean) grade tenure wage
xpose, clear varn
graph hbar v1, over(_varname)
******!
which does produce something like the -statplot- graph on the right in Figure 2, but in a less-straightforward way, and in a way that is difficult to extend to other configurations (multiple vars in the varlist, multiple over() or by() categories, etc).  
Figure 2 compares the syntax and output of -graph hbar- and -statplot-:
***********************!beginFig2
graph hbar (mean) grade tenure wage,  ///
    name(g1, replace) tit({it:{bf:-graph bar-}})
statplot grade tenure wage,  ///
    name(g2, replace) tit({it:{bf:-statplot-}})
*After running the commands above, compare the graphs with graph combine: 
gr combine g1 g2 
***********************!endFig2

Tuesday, January 4, 2011

ResearchNotes moved to new sub-domain -- Update your Feed Links

I've moved the site from http://eric-a-booth.blogspot.com to a sub-domain at my site:


The old 'blogspot' address will continue to forward to the new site hosted on my subdomain for a while, but please update your bookmarks, feed subscription, or email subscription.

Monday, January 3, 2011

TextWrangler and Stata

With the introduction of syntax highlighting and the ability to handle larger do-files, I've started to use the built-in Stata do-file editor more and more in lieu of Bare Bones Software's TextWrangler.

However, several times a week I still find myself firing up TW for more complex tasks.  Most often it's when I need to show the differences between 2 or more versions/revisions of code, or do a complex find/replace, character substitution, duplicate line deletion, or regular expression search.  I also use it for FTP uploads and for inspecting/opening text versions of unknown filetypes on my Mac OSX.  And occasionally, I'll open it if I'm working simultaneously on several do-files, since the Mac version of Stata 11 still doesn't support tabbed do-files.

I use the TW for Stata scripts found here, and I use the customizable shortcut functions in OSX 10.6 to send my do-files (or a section of a do-file) to Stata from TW.

The only issue I have with the script above is that it's outdated.  The DataNinja site hasn't been updated in a while and I don't expect it to be any time soon, so it's unlikely that I'll get syntax highlighting for new commands that have been created since then.

One way I've found to update the TW scripts is to grab the new "official" syntax commands from the Stata .app package contents (see #1 below), then grab the list of all other commands/ado-files I've downloaded from SSC or elsewhere (see #2 below), and add all of these to the outdated stata.plist file for TW.
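Here's a sketch of how to locate both sets from within Stata (I use -sysdir- rather than hard-coding paths, since the exact locations vary by Stata version and install):
****
*#1: the "official" ado-files that ship with Stata:
sysdir      //lists the system directories: BASE holds the official ado-files
*#2: the user-written commands/ado-files installed from SSC or elsewhere:
ado dir     //lists every package installed in the PLUS directory
****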


Thursday, December 30, 2010

Finding your way around Stata

One of the things my students first get stuck on is how to find things (e.g. files, directories, variables with particular labels or notes) in Stata.
There are a lot of commands for finding things like files/datasets, directories, command help documentation, user commands/ado-files, variables, values, notes/chars, etc.  Some commands find only one of these things, some can find several, and most of these things can be found by more than one command.  It can be a bit overwhelming and confusing, and I've found that students who fall behind early in a class using Stata often get stuck at the point of being able to find these things -- particularly directories and command ado/help files.

Of course, good use of a search engine is a key resource, but the table below gives an overview of the commands I use to find these things in Stata (this table can also be found in my Module 1 Lecture for PHPM 672).  Undoubtedly, there are other commands that will do these tasks, but these are the ones that stuck with me after I started using them.
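A few of the workhorse commands from that table, as a quick runnable sketch (see each command's help file for the full details):
****
*files & directories:
pwd                   //where am I?
dir *.dta             //list the datasets in the current directory
findfile auto.dta     //locate a file along the adopath
*commands, ado-files, & help documentation:
which summarize       //report where a command's ado-file lives (or that it's built-in)
findit statplot       //search help files, FAQs, SJ articles, & user-written packages
*variables, values, & notes:
sysuse auto, clear
lookfor mileage       //find variables by name or variable-label text
notes list            //list the notes attached to the dataset & variables
char list             //list the characteristics (chars)
****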

Sunday, December 26, 2010

Fun with Stata: Games for Stata Edition

Over at Mitch's "Stata Daily" blog, he describes a "hangman" game sent to him by Marek Hlavac.  I'm a sucker for non-standard uses of Stata (e.g., [1] [2] [3]), so I played with it for a while.  This also convinced me to make public one of my earliest attempts at writing a Stata ado-file/program:  -blackjack-.

The game is played by typing -blackjack- into the command window; the game then prompts the user for the amount she wants to bet (the default is $500, which replenishes after you lose it all or exit Stata) and whether to hit or stay.  It doesn't accurately represent all the rules and scenarios of a real game of blackjack (e.g., no doubling down), so don't use it to prep for your run at taking down a Vegas casino.

Fair warning that -blackjack- is visually quite ugly (the cards tend to misalign; I could have come up with a better card design for face cards than a "{Stata}" center; and (because I was learning about Stata chars) I used some ASCII symbols for suits instead of something simple like K, Q, J, A) and I've run into the occasional bug that I haven't taken time to investigate & fix.
One thing I like about Hlavac's -hangman- is how he uses subprograms to define and display the stages of building the hangman.  I wish I had thought about this for displaying my cards -- it probably would have saved a lot of copying/pasting of -if- loops displaying the various card configurations.
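The pattern is roughly this (a sketch of the idea, not Hlavac's actual code):
****
cap program drop showstage
program define showstage
    args stage
    *each stage adds one more piece to the drawing:
    di as txt "  +---+"
    if `stage' >= 1 di as txt "  O   |"
    if `stage' >= 2 di as txt " /|\  |"
    if `stage' >= 3 di as txt " / \  |"
    di as txt "======="
end
showstage 2
****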

Writing/tinkering with the ado-file for this game probably provided more amusement for me than actually playing it. It's a great mindless activity to do if you're doing some Stata coding and need a break.    Check out -blackjack- here.

At the Stata Daily blog, Nick J. Cox comments about some other Stata games/simulations/etc. available at SSC:  -chaos- and -irrepro-. I also mentioned the similar programs -dice-, -cards- (which I cannot get to work in Stata 11), and -heads- from UCLA's Stata page; see:
****
net install dice, ///
from(http://www.ats.ucla.edu/stat/stata/ado/teach) ///
replace all
****
All these are fun (and possibly instructive) programs for Stata.

Monday, December 20, 2010

Creating example datasets for collaboration with other Stata users

I'm lucky to be in a research environment where most of my colleagues and students use Stata.  Also, I regularly participate on Statalist.  Both of these have helped push me to periodically refine my habits when it comes to communicating about Stata.

When it comes to asking questions on Statalist, I've tried to stick closely to the Statalist FAQ and other tips mentioned by William Gould on the Stata NEC Blog.  However, for answering questions on Statalist, I find Maarten Buis's page on his Statalist postings especially helpful.

I've learned a lot from Maarten's FAQ about
(1) the types of questions that are not obvious to others on Statalist (and this tends to translate over to my students & colleagues as well) and
(2) ways to minimize this confusion by doing things as simple as creating clearly marked, self-contained working examples of code, or using comments to create a roadmap for the code in an example and to avoid issues with code wrapping.

When it comes to creating clearly marked, self-contained examples for others, there are a couple of standard tools:
  • Using a canned Stata dataset for the example (as Maarten mentions)
  • Creating a fake dataset using a variety of -generate-, -replace-, or random data functions (a quick sketch follows this list).  See my previous post about adding a random, fake string function (-ralpha-) to this set of tools.
  • Finally, if you cannot easily get the structure you need for an example from a canned or easily -generate-d dataset, you can always create a data example using -input-
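Following the second approach, here's a minimal fake-data sketch (the variable names and distributions are just illustrations):
***************!
clear
set obs 20
set seed 1234                                //so the "random" example is reproducible
g id = _n
g x = rnormal(50, 10)                        //fake continuous measure
g byte treated = runiform() < .5             //fake binary indicator
g str1 group = char(65 + int(3*runiform()))  //fake string category: "A", "B", or "C"
list in 1/5
***************!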
The idea behind -input- is that I can insert a working example into a do-file or Statalist posting that is self-contained.  Running the code below will -input- this data example into Stata's memory:

***************!
clear
inp   str14(state) pop str2(state2) divorce region marriage pop65p
"Alabama" 3893888 "AL" 26745 3 49018 440015
"Alaska" 401851 "AK" 3517 4 5361 11547
"Arizona" 2718215 "AZ" 19908 4 30223 307362
"Arkansas" 2286435 "AR" 15882 3 26513 312477
"California" 23667902 "CA" 133541 4 210864 2414250
"Georgia" 5463105 "GA" 34743 3 70638 516731
"Hawaii" 964691 "HI" 4438 4 11856 76150
"Idaho" 943935 "ID" 6596 4 13428 93680
"Illinois" 11426518 "IL" 50997 2 109823 1261885
"Indiana" 5490224 "IN" 40006 2 57853 585384
"Iowa" 2913808 "IA" 11854 2 27474 387584
"Kansas" 2363679 "KS" 13410 2 24847 306263
"Kentucky" 3660777 "KY" 16731 3 32727 409828
"Louisiana" 4205900 "LA" 18108 3 43460 404279
"Maine" 1124660 "ME" 6205 1 12040 140918
"Maryland" 4216975 "MD" 17494 3 46278 395609
"Massachusetts" 5737037 "MA" 17873 1 46273 726531
"Michigan" 9262078 "MI" 45047 2 86898 912258
end
***************!