"What did she say?"
A brief glossary of jargon from the culture of science.


PI: principal investigator, the person in charge of a research project and often of the laboratory as well. Usually synonymous with graduate advisor. "The boss."

Post-doc: a member of a research group who already has a Ph.D. (or M.D.) but is not a professor. Depending on the PI, post-docs are given different levels of responsibility and authority; sometimes they are asked to help supervise training of graduate students, while in other instances they are left to work on their own projects within the lab.

Technician: an employee of a lab, or of the department the lab belongs to, whose responsibilities generally include data collection but not experimental design. Sometimes treated largely as a piece of lab equipment, sometimes recognized as a valuable resource in training graduate students in particular laboratory methods.

Graduate student: is it a student or an employee? Primarily concerned with learning how to think like a scientist or with amassing more data to advance the boss's research? It depends on who you ask.

Hypothesis: a hunch about how a particular piece of the world might be. A hypothesis can be very speculative or extremely well supported by experimental findings.

Theory: a set of hypotheses that presents an account of what the world (or a piece of it) is like: what sort of stuff it's made up of, and how that stuff behaves. As with hypotheses, theories can be more or less well supported by observations.

IRB: institutional review board, a committee at a school or university that evaluates proposed research to make sure it complies with the institution's guidelines (e.g., for use of animal or human subjects, procedures for dealing with patentable discoveries, etc.).

IACUC: institutional animal care and use committee, a review board that deals specifically with issues around research involving animals.

NSF: National Science Foundation, a major source of research grants.

NIH: National Institutes of Health, a major source of research grants.

trimming: leaving some collected data points out of the data set being presented and/or analyzed. There may be better and worse ways to trim.
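One way trimming can be "better" is when the rule is stated before looking at the data and reported with the results. The sketch below (a hypothetical rule, not a standard prescribed method) keeps only points within k sample standard deviations of the mean:

```python
def trim_outliers(data, k=3.0):
    """Keep only points within k sample standard deviations of the mean.

    A trimming rule like this is more defensible when it is chosen in
    advance and reported alongside the trimmed results.
    """
    n = len(data)
    mean = sum(data) / n
    # Sample standard deviation (divide by n - 1)
    std = (sum((x - mean) ** 2 for x in data) / (n - 1)) ** 0.5
    return [x for x in data if abs(x - mean) <= k * std]
```

Silently dropping whichever points spoil the fit, by contrast, shades into cooking.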

cooking: making up data (or "slightly modifying" actual data) to achieve a better fit with one's hypothesis.

mining/dredging/fishing: examining an existing data set for patterns, evidence of the existence or nonexistence of particular phenomena, etc. Discouraged by those who feel that the hypothesis to be tested must be specified in advance of experimentation in order to collect data objectively.

null hypothesis: essentially a default assumption being tested by research (e.g., that a compound won't be harmful, that two factors are not causally connected to each other, etc.).

false positive (or Type-I error): on the basis of data, rejecting a null hypothesis that is actually true. (For example, if the null hypothesis is that a new sweetener isn't carcinogenic, data indicating that it is carcinogenic when it really isn't would be a "false positive".)

false negative (or Type-II error): on the basis of data, failing to reject a null hypothesis that is actually false. (The null hypothesis is that the new sweetener isn't carcinogenic. If the data indicate that it isn't carcinogenic when in fact it really is carcinogenic, that would be a "false negative".)
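The two kinds of error come from crossing what is actually true with what the data lead us to decide. A minimal sketch (hypothetical function name) of that two-by-two table:

```python
def classify_outcome(null_is_true, null_rejected):
    """Label a test result given the ground truth and the decision made."""
    if null_is_true and null_rejected:
        return "false positive (Type I error)"
    if null_is_true and not null_rejected:
        return "correct (null retained)"
    if not null_is_true and null_rejected:
        return "correct (null rejected)"
    return "false negative (Type II error)"

# Sweetener example: the null hypothesis ("not carcinogenic") is true,
# but the data lead us to reject it anyway.
print(classify_outcome(null_is_true=True, null_rejected=True))
```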

data vs. noise: data are the measurements taken to reflect what's really going on in a system, while noise describes measurements that have more to do with the instruments, random errors, etc. Figuring out how to tell one from the other is a major challenge for scientists.

accuracy: how close a measurement or prediction is to the reality it is trying to describe. An accurate measurement is not always very precise. It is more accurate to say the Earth takes a year to orbit the sun than to say it takes two years; it is (nearly) equally accurate to say it takes 365.25 days.

precision: how fine-grained a measurement or prediction is. 1.00034 meters is more precise than 1 meter. However, a precise measurement is not always accurate.
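The orbit example above can be made numerical. Treating accuracy as distance from the true value (a hypothetical helper, using roughly 365.25 days as the true period):

```python
TRUE_PERIOD_DAYS = 365.25  # (approximately) how long Earth takes to orbit the Sun

def accuracy_error(reported, true_value=TRUE_PERIOD_DAYS):
    """Accuracy: how far a reported value is from the value being described."""
    return abs(reported - true_value)

# "One year" is far more accurate than "two years"...
print(accuracy_error(365), accuracy_error(730))
# ...while "365.25 days" is more precise (more significant figures)
# and about equally accurate. A very precise report can still be wildly
# inaccurate: 730.00001 days is precise, but off by a year.
```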

peer review: the process by which scientific reports are read and critiqued by other scientists before they are accepted for publication in scientific journals. Usually, the author of a report receives peer reviews of the report and must revise his or her manuscript to address the issues raised in the review. (Sometimes this requires subjecting data to further analyses or even doing additional experiments to collect more data.) The peer review process is usually described as a way to ensure quality control over the scientific literature.

error bars: in general, a way to represent how much uncertainty is associated with an experimental result.
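What an error bar represents varies: it may be a standard deviation, a standard error of the mean, or a confidence interval, and a good figure says which. As one common choice, a sketch of the standard error of the mean:

```python
import math

def standard_error(measurements):
    """Standard error of the mean: one common quantity shown by error bars."""
    n = len(measurements)
    mean = sum(measurements) / n
    variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)  # sample variance
    return math.sqrt(variance / n)

# Three repeated measurements; the error bar would span mean +/- this value.
print(standard_error([1.0, 2.0, 3.0]))
```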

Let me know if other bits of jargon come up that you'd like to see added to the glossary!
