[meteorite-list] Classification Criteria-- Abstract

From: Mr EMan <mstreman53_at_meteoritecentral.com>
Date: Mon, 23 Jun 2008 11:28:04 -0700 (PDT)
Message-ID: <979188.34575.qm_at_web55201.mail.re4.yahoo.com>

Thanks, folks! Very much appreciated responses.

To sum up what I read: the classification of meteorites is an evolving system that adapts as new knowledge is gained. The historical "stony, stony-iron, iron" scheme is passé in the light of modern analytical tools.

There is a "tool box" of standard mineralogical tests using common lab equipment, plus a few specialized ones. Based on an initial visual inspection by a subject-matter expert, a menu of tests is selected to determine content: mineral, elemental, and isotopic proportions. While not specifically stated, I assume that standard lab practice dictates the number and location of sample sites for microprobe testing, for example.
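To make that "menu of tests" idea concrete, here is a toy sketch in Python. The categories and test names are my own illustrative assumptions, not an actual lab protocol:

    # Hypothetical mapping from an initial visual assessment to a menu of
    # follow-up analyses. Categories and test names are illustrative only.
    TEST_MENU = {
        "chondritic": ["thin-section petrography",
                       "electron microprobe (olivine/pyroxene compositions)",
                       "oxygen isotope analysis"],
        "achondritic": ["electron microprobe",
                        "oxygen isotope analysis",
                        "trace-element analysis"],
        "iron": ["bulk Ni/Ga/Ge/Ir analysis",
                 "etch for Widmanstatten pattern"],
    }

    def select_tests(visual_category):
        """Return the suggested analyses for an initial visual call."""
        # Fall back to a full workup if no initial call can be made.
        return TEST_MENU.get(visual_category, ["full petrographic workup"])

    print(select_tests("chondritic"))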

As with most things, there are exceptions: not all tests, in all circumstances, are definitive in discriminating among all classifications, so alternate or supplemental tests are employed to refine the classification or to remove ambiguities.

Numbers such as 3.1, 3.3, 5, etc. are only "nominal"--names for certain values. They do not represent equal intervals. For example, a "3" is not half a "6". A "point 1" is not a mathematical value but a "name" for a secondary measurement, just as "H" and "5" are names for ranges of values. They are naming conventions that represent associated, but not equal, data ranges for various aspects of mineralogy. This is akin to a model number on a washing machine: each character means something specific about the washer, but it is not a sequential number.
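A minimal sketch of that "model number" reading, in Python. The parsing rule is my own simplification; the point is that both parts of a code like "LL3.1" are categorical labels, not quantities:

    # Treat a classification string like "H5" or "LL3.1" as two labels.
    def parse_classification(code):
        group = code.rstrip("0123456789.")    # chemical group, e.g. "H", "LL"
        ptype = code[len(group):] or None     # petrologic type, e.g. "5", "3.1"
        return {"group": group, "petrologic_type": ptype}

    print(parse_classification("LL3.1"))
    # -> {'group': 'LL', 'petrologic_type': '3.1'}
    # Doing arithmetic on these labels is meaningless: a type 3 is not
    # "half" a type 6, any more than washer model 300 is half a model 600.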

When all the testing data are charted, the researcher looks for the best overall "fit" within the plots of all other meteorites, especially the established clans the specimen appears to belong to. If the fit is cleanly within all normalized values, a call can be made. On a side note, I deduce that meteorites with multiple lithologies, not all seen in a single sample, are sometimes given two separate classifications by two independent researchers. In this case, for the time being, our approval system doesn't pass judgment or try to resolve the differences. In effect, both researchers are right.
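Here is a hedged sketch of that "fit" step: check whether a measured value falls inside the published envelope for each candidate group. The envelope numbers below are rough illustrative figures for ordinary chondrite olivine (fayalite mol%), not reference data:

    # Toy envelopes: group -> {measurement: (low, high)}. Values are
    # approximate and for illustration only.
    ENVELOPES = {
        "H":  {"fayalite_mol_pct": (16.0, 20.0)},
        "L":  {"fayalite_mol_pct": (23.0, 26.0)},
        "LL": {"fayalite_mol_pct": (27.0, 32.0)},
    }

    def candidate_groups(measurements):
        """Return every group whose envelope contains all measured values."""
        hits = []
        for group, ranges in ENVELOPES.items():
            if all(k in measurements and lo <= measurements[k] <= hi
                   for k, (lo, hi) in ranges.items()):
                hits.append(group)
        return hits

    print(candidate_groups({"fayalite_mol_pct": 24.5}))  # -> ['L']: a clean fit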

When there isn't a clean fit of the data plots--if data points fall outside the envelope--the researcher should consult other specialists prior to publishing a classification. Sometimes this results in a sub-grouping or an "ungrouped" classification that awaits the arrival of other similar specimens.
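Continuing the sketch above, the no-clean-fit case can be made explicit: a single match supports a call, no match defers the specimen as ungrouped, and multiple matches signal that supplemental tests are needed. Again, the decision rule and names are my own illustration:

    def tentative_call(measurements):
        """One possible decision rule on top of candidate_groups()."""
        hits = candidate_groups(measurements)
        if len(hits) == 1:
            return hits[0]  # clean fit: a call can be made
        if not hits:
            return "ungrouped -- consult other specialists before publishing"
        return "ambiguous -- supplemental tests needed: " + "/".join(hits)

    # Fa 21.0 falls between the H and L envelopes above:
    print(tentative_call({"fayalite_mol_pct": 21.0}))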

The McCoy paper that Sterling linked to answered a lot of questions. I see that Jeff was a major contributor. Thanks again to all for that great insider's perspective.

Elton
Received on Mon 23 Jun 2008 02:28:04 PM PDT

