What do “Astronomy 101” (hereafter, Astro 101) instructors teach in their one-semester courses that “cover the Universe”? Clearly, such a cosmic goal is impossible. The implicit sense in the community is that a consensus cannot be reached because each instructor has his or her “pet” topics. This attitude has resulted in little guidance for faculty, especially because most Astro 101 instructors do not have a degree in astronomy, and many are teaching the class for the first time (Fraknoi 2001; Zeilik 1997).
Astronomy textbook publishers would be overjoyed to have such a list; their market research reinforces the practice of including many topics, because any one instructor will not choose to teach all of them. The marketing goal is to offer a range so that instructors may make selections for their courses, with no crucial topic left out that might result in the loss of a large adoption. Nothing gets the attention of an editor more than a sales representative calling about a large adoption and asking if Zeilik's book (Astronomy: The Evolving Universe) includes orbital resonances in the asteroid belt (it does not). Then the author is under extreme pressure to add that topic just for that one adoption! My experience since 1975 is that peer reviewers are quick to add topics but extremely reluctant to cut them. This dynamic again results in books with exhaustive (and similar) topical coverage, but it does not converge on a core set of concepts.
This article presents an innovative process, based on cognitive ranking and relatedness-rating tasks, to reach a consensus from two different sample populations. This method can be applied to larger populations to reach a much-needed community consensus.