Combined-Score Ranking
Combined Score is the most holistic and generally relevant score presented in the W&L Law Journal Rankings: a composite of each journal's Impact Factor and Total Cites count. The Combined Score gives approximately one-third of its weight to Impact Factor and two-thirds to Total Cites (journal and case cites combined). The resulting score is then normalized to a 100-point scale so that every journal is ranked relative to the highest-scoring journal.
Combined Score is rounded to the nearest hundredth in the site's standard display; however, the full calculated score extends to 15 decimal places. Within a ranked list, journals with the same displayed Combined Score appear in an order determined by these hidden digits; for example, a journal ranked 50 with a displayed Combined Score of 24.96 has a higher underlying score, and thus a higher ranking, than a journal ranked 51 whose displayed Combined Score is also 24.96.
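A minimal sketch of this behavior (the journal names and scores below are invented for illustration): ranking is done on the full-precision score while only the rounded value is shown.

```python
# Invented data: two journals whose scores differ only in hidden digits.
journals = [
    ("Journal B", 24.957129876543210),
    ("Journal A", 24.964881234567890),
]

# Rank on the full-precision score...
ranked = sorted(journals, key=lambda j: j[1], reverse=True)

# ...but display it rounded to the nearest hundredth.
for rank, (name, score) in enumerate(ranked, start=1):
    print(f"{rank}. {name}  {score:.2f}")

# Both journals display 24.96, yet Journal A outranks Journal B
# because of the hidden digits.
```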
The formula for the Combined Score is the sum of the weighted, normalized scores for Impact Factor (IF) and Total Cites (TC): ((IF x weight x 100) / highest IF) + ((TC x (1 - weight) x 100) / highest TC). The result is then displayed in the Combined Score column as a percentage of the largest score in the retrieved set of journals, so the displayed Combined Score is (Combined Score / highest Combined Score) x 100. The top-ranked journal(s) in a retrieved set therefore always display a value of 100, and the other journals display lower numbers in proportion to their calculated scores.
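A minimal sketch of that calculation in Python (the function and variable names are mine; the 0.33 default weight is the site's, discussed below):

```python
def combined_score(impact_factor, total_cites, highest_if, highest_tc, weight=0.33):
    """Raw Combined Score: weighted, normalized Impact Factor plus
    weighted, normalized Total Cites."""
    return ((impact_factor * weight * 100) / highest_if
            + (total_cites * (1 - weight) * 100) / highest_tc)

def displayed_score(raw, highest_raw):
    """Displayed value: the raw score as a percentage of the highest
    raw score in the retrieved set."""
    return (raw / highest_raw) * 100

# With invented numbers, the top journal in the set always displays 100.
scores = [combined_score(if_, tc, highest_if=5.0, highest_tc=10000)
          for if_, tc in [(5.0, 10000), (3.1, 6200), (1.4, 900)]]
top = max(scores)
print([round(displayed_score(s, top), 2) for s in scores])
# e.g. [100.0, 62.0, 15.27]
```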
Combined Score ranking is based on Ronen Perry's argument that neither ranking by total cites nor ranking by Impact Factor is sufficient on its own; the two must be combined. See Ronen Perry, The Relative Value of American Law Reviews: Refinement and Implementation, 39 Conn. L. Rev. 1 (2006), available at https://ssrn.com/abstract=897063. The problem in any combined ranking is deciding what weight to give the underlying factors. Perry calculated a weight of 0.577 for Impact Factor (and thus 0.423 for Total Cites) on the premise that Harvard Law Review and the Yale Law Journal have equal prestige: 0.577 is the weight at which the combined Impact Factor and Total Cites scores of those two journals are equal over his survey period of 1998-2005. The W&L Rankings, however, use a default weight of 0.33. That value was chosen because it gives Harvard the highest combined rank in each of thirteen ranking surveys (1988-1995 through 2000-2007), with the exception of the 1991-1998 period, when Harvard scored 98 against Yale's 100. The 0.33 weighting also aims to maximize Yale Law Journal's average rank over those same surveys.
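Perry's calibration amounts to solving for the weight at which two journals' combined scores coincide. A minimal sketch of that algebra (the function name and approach are mine, not Perry's; real survey data would be substituted for the arguments):

```python
def equalizing_weight(if_a, tc_a, if_b, tc_b, highest_if, highest_tc):
    """Weight w at which journals A and B tie under the combined-score
    formula.  Setting the two formulas equal and solving for w gives
    w = b / (a + b), where a is A's normalized IF advantage and b is
    B's normalized TC advantage."""
    a = (if_a - if_b) / highest_if
    b = (tc_b - tc_a) / highest_tc
    if a + b == 0:
        return None  # the journals tie at every weight, or never tie
    return b / (a + b)
```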
To compare new journals with established journals more fairly, the Combined Score is adjusted for journals that, at the survey date, have existed for fewer than eight years. For example, a journal that began in 2007 would, for the 2009 ranking, have its Total Cites multiplied by 7.3 (this extrapolated value does not appear in the Total Cites column; it is used only in the Combined Score formula). The aim is to estimate, from the cites a journal has received over its few years of life, how many cites it would likely have had after at least eight (or, from 2018 onward, five) years; a sketch of the adjustment follows the multiplier list below.
The multipliers are as follows, where the number before the parenthetical is the journal's age (the difference between the survey year and the year the journal began): 0 (29), 1 (29), 2 (7.3), 3 (3.4), 4 (2.3), 5 (1.6), 6 (1.3). These multipliers are based on a sample of three journals (American Law and Economics Review, Journal of Appellate Practice and Procedure, and Journal of Law and Family Studies), all of which began publication in 1999. The sample examined how many cumulative cites each journal had accrued two years, three years, and so on after it began, and what multiplier each year's total would have needed to predict the journal's 2006 total. To stay conservative, the lowest multiplier among the three journals was used at each age. The lowest multiplier in this sample for a journal in its second year of publication was actually 43, but that seemed too high for the volatile task of predicting an eight-year total from a two-year total, so it was arbitrarily reduced by one-third to a multiplier of 29. The same value is used for a journal in its first year of publication, since there are often no published cites to a journal that new.
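Putting the multipliers together, a hedged sketch of how the adjustment might be applied (the names are mine; only the multipliers and the worked example come from the text above):

```python
# Multipliers keyed by journal age (survey year minus first year).
AGE_MULTIPLIERS = {0: 29, 1: 29, 2: 7.3, 3: 3.4, 4: 2.3, 5: 1.6, 6: 1.3}

def adjusted_total_cites(total_cites, survey_year, first_year):
    """Extrapolate Total Cites for journals younger than eight years.

    The adjusted value feeds only the Combined Score formula; the
    displayed Total Cites column is left untouched."""
    age = survey_year - first_year
    return total_cites * AGE_MULTIPLIERS.get(age, 1)  # age >= 7: no change

# The worked example from the text: a journal begun in 2007, surveyed
# in 2009 (age 2), has its cites multiplied by 7.3.
print(adjusted_total_cites(100, 2009, 2007))  # 730.0
```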