2. History of the Pregnancy Test
For centuries, women have sought reliable ways to recognize pregnancy as early as possible, although this was not always as easy as it is today. Women have always preferred to check it privately, in intimate conditions, and in the simplest and most reliable way available. Although no scientific method for detecting early pregnancy existed until the 1920s, and the first home pregnancy tests did not appear in the United States until 1976, the remarkable history of “laboratory” pregnancy tests began in ancient Egypt [7,8].
2.1. Ancient Egypt
The history of the pregnancy test dates back to ancient Egypt. Ancient Egyptian physicians rightly recognized that urine is the best material for laboratory tests to detect pregnancy [8]. Four thousand years ago, the Egyptians developed the first in vitro diagnostic test to detect a unique substance present, according to their observations, only in the urine of pregnant women. It is the most famous and advanced of the ancient pregnancy tests. Egyptian papyri dating from 1500–1300 B.C.E. describe the pregnancy test used in ancient Egypt. The test consisted of watering cereal seeds (wheat and barley) with the urine of the examined woman every day for about 10 days. Sprouting of the cereal grains meant that the woman was pregnant; no germination was a negative result (no pregnancy). According to the available descriptions, this test allowed not only the detection of pregnancy but also the determination of the sex of the developing child. According to the ancient Egyptians, if the germination of wheat preceded the germination of barley, a girl was expected, and if barley germinated first, a boy was to be born
[8,9,10,11,12]. It is worth noting, however, that translations of this papyrus do not agree either on the cereal species used in the test or on the interpretation of their germination with respect to sex recognition. Ghalioungui et al. [11], analyzing translations of the text made by various Egyptologists, note that three different grains are mentioned (wheat, barley, and buckwheat). Moreover, the germination of wheat is not always considered an indicator of a female fetus, since buckwheat is also mentioned as a cereal indicating female sex. In general, however, it can be assumed that the test itself was performed as described above and that sprouting of the cereals meant that the urine came from a pregnant woman.
The ancient Egyptian test of sprouting grains has been verified several times. In 1933, Manger [13] conducted an experiment on a sample of 100 pregnant women, whose urine he used to water grains of wheat and barley. On the basis of his results, he concluded that the urine of pregnant women indeed accelerates the germination of cereals and that faster growth of barley than of wheat indicated a girl, whereas barley growth that was neither accelerated nor delayed indicated a boy. He estimated the effectiveness of the test in recognizing the sex of the child at 80%, although this conclusion was not consistent with the observations of the ancient Egyptians regarding the relationship between the type of germinating grain and the sex of the fetus. In 1963, Ghalioungui et al. [11] performed an analogous experiment with 40 urine samples from pregnant women, using two types of control: urine samples from non-pregnant women and men, and distilled water. They found that 70% of the urine samples from pregnant women stimulated the germination of cereal grains. None of the urine samples from non-pregnant women or men showed such activity. Cereal germination was unrelated to fetal sex [11].
The ancient Egyptians thus established empirically that the urine of pregnant women can stimulate seed germination. This is probably due to the increased concentration of estrogens in the urine of women in the early stages of pregnancy (Figure 1). Human estrogens, like phytoestrogens, can affect the initiation of germination and stimulate plant development [14].
Figure 1. Comparison of the dynamics of hormonal changes during the menstrual cycle in women: (A) without conception and (B) with conception; FSH—follicle-stimulating hormone, LH—luteinizing hormone, hCG—human chorionic gonadotropin.
2.2. From Hippocrates to Galen
In ancient Greece (ca. 400 B.C.), the methods of detecting pregnancy in both the Hippocratic and Hellenic schools were very similar to those used by the Egyptians. It was still believed that the urine of pregnant women contained life-giving components that stimulate seed germination [15,16]. However, methods that interfered directly with the woman’s body were also readily used. One such method was the onion test: an onion was inserted into the woman’s vagina and left there overnight. If her breath smelled of onions in the morning, she was considered not pregnant; it was believed that in a pregnant woman the smell of the onion could not travel from the vagina to the mouth. Pregnancy was also diagnosed when a woman’s stomach became distended and painful after she drank honey dissolved in water. The Greeks, like the ancient Egyptians, also believed that a woman who felt nauseous after drinking milk or from the smell of beer was pregnant [10,16]. Along with the development of trade, these theories became known throughout Europe, and such tests remained in wide use until the Middle Ages.
2.3. From the Middle Ages through the Seventeenth Century
Perhaps slightly more empirical techniques were used in the Middle Ages. Visual assessment of the physical characteristics of urine (e.g., color, clarity) became a popular method of detecting pregnancy at the time. Doctors known as the so-called “piss prophets” appeared in Europe; they specialized in diagnosing many diseases by visual assessment of a urine sample, i.e., uroscopy. Medieval uroscopy was a medical practice that involved the visual examination of urine for the presence of pus, blood, changes in color or translucence, or other abnormalities. The roots of uroscopy go back to ancient Egypt, Babylon, and India, and the practice was especially important in Byzantine medicine. These techniques were also commonly used by Avicenna [16]. According to the guidelines of medieval uroscopy, the urine of a pregnant woman was clear, pale lemon turning to whitish, with a foamy surface [10,16,17,18]. Other urine tests were also used in the Middle Ages. For example, it was believed that milk floated on the surface of a pregnant woman’s urine. Some physicians of the time believed that if a needle inserted into a vial of urine turned rust-red or black, the woman was probably pregnant [10,16,17,18]. Another popular test involved mixing wine with urine and observing the changes
[18,19]. Today, we know that many of these tests relied on the presence of protein in the urine and on changes in urine pH caused by the hormonal changes of pregnancy. Indeed, protein-containing urine can be cloudy and frothy. Alcohol, in turn, reacts with some urinary proteins and precipitates them, and a more alkaline urine pH can darken some metals or remove rust. Pregnant women tend to have higher levels of protein in their urine than non-pregnant women, and their urine pH is more alkaline, so these tests may have been quite effective for their time.
Various provocative pregnancy tests were also used. Some doctors advised a woman suspected of being pregnant to drink a sweet drink before going to bed; if she complained of pain around the navel in the morning, the pregnancy was considered confirmed. In the 17th century, some doctors gave a woman a ribbon dipped in her urine to sniff; if the smell made her feel sick or vomit, she was probably pregnant [9,19]. In another 17th-century test, a ribbon dipped in the woman’s urine was burned in a candle flame, and if the smell of the smoke made her feel sick, she was probably pregnant [20]. It is difficult to find any logical explanation for these tests other than the natural tendency of pregnant women toward excessive nausea, which is due to the hormonal changes caused by pregnancy.
2.4. Nineteenth Century
The nineteenth century did not bring anything fundamentally new in this area; the main material for study was still urine. Nevertheless, researchers tried to approach it in a more rational way, and attempts were made to link microscopic findings in urine (bacteria or crystals) with pregnancy. In the 19th century, French doctors used a urine test based on the “kyesteine pellicle” as a method of pregnancy detection: a sticky film, called the early pregnancy membrane, was observed to form on the surface of a pregnant woman’s urine after it had stood in a vessel for several days [21]. The diagnosis of pregnancy, however, was still based mainly on the observation of physical changes in a woman’s body and the presence of characteristic symptoms such as morning sickness [19].
2.5. From the 1920s to the 1960s
The first major steps toward a reliable pregnancy test became possible in the 1920s, after the discovery of a hormone present only in the urine of pregnant women: human chorionic gonadotropin (hCG). This discovery finally provided a reliable, empirical marker that could be used for testing purposes. Since then, all pregnancy tests in use have been based on detecting the presence or absence of hCG in the urine [22,23].
Until the 1960s, pregnancy tests were mainly biological methods involving laboratory animals, chiefly mice, rats, rabbits, and certain species of frogs and toads [21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46].
In 1927/1928, the German scientists Selmar Aschheim and Bernhard Zondek developed the first biological pregnancy test, known as the Aschheim–Zondek test (A–Z test), which detected the presence of hCG in the urine (Figure 2).
Figure 2. Aschheim–Zondek test (mouse); an illustrative scheme. The red dots illustrate changes in the ovaries of mice.
To test for pregnancy, the woman’s urine was injected into a sexually immature female rat or mouse. When the urine came from a pregnant woman, the injected animal exhibited an oestrous reaction despite its sexual immaturity. In the mouse version of the A–Z test, sexually immature 3- to 4-week-old female mice weighing on average 6–8 g were used. Five mice were used for each pregnancy test and injected subcutaneously with a total of 1.2–2.4 mL of urine in up to six doses over a period of forty-eight hours. The result was read, on average, after one hundred hours; a positive result (pregnancy) was indicated by ovarian congestion and the presence of hemorrhagic bodies and numerous corpora lutea. Changes in the ovaries were assessed microscopically. If the patient was not pregnant (negative result), no such reaction occurred in the mouse ovaries. A positive A–Z test reflected the presence of the pregnancy-associated gonadotropin in the patient’s urine. Interestingly, during the early A–Z test studies, scientists also discovered that testicular tumors can produce hCG [22,24,28]. Furthermore, hyperemia of the rat ovary was reported after the administration of luteinizing and luteotropic gonadotropins but not after follicle-stimulating hormone [29].
Rats and mice were later replaced by rabbits. In the early 1930s, Maurice Harold Friedman, a physician and physiology researcher at the University of Pennsylvania, developed the “rabbit test” (Friedman test) as a modification of the A–Z test [22,26]. Friedman showed that a single dose of urine from a pregnant woman was able to induce ovulation in sexually immature female rabbits. Importantly, the test did not require killing the rabbit, which was necessary in the mouse test [17,18,19]. In the Friedman test, the woman’s urine was injected intravenously into the ear vein of a female rabbit (Figure 3). If hCG was present, the rabbit ovulated within 48 h. Evaluation of the ovaries required opening the rabbit’s abdomen, which was performed under anesthesia without killing the animal. Blood-filled follicles and fresh corpora lutea were considered a positive result [30,31,32,33].
Figure 3. Friedman test (rabbit); an illustrative scheme.
The rabbit could be reused for the test after about three weeks, but not more than three times. In practice, however, most rabbits that gave a positive result were killed after the test [30,31,32,33]. In the days when pregnancy tests were performed on rabbits, the colloquial phrase “the rabbit died” became synonymous with “you are pregnant” and can still be found in humorous expressions today.
Interestingly, the accuracy of the A–Z test and the Friedman test was estimated at around 82.5–99.5%. Both tests were used on a massive scale from the 1930s until the early 1960s [34,35,36,37,38].
Mice, rats, and rabbits were in turn replaced by frogs. The frog pregnancy test (“frog test”) was developed by the British zoologist Lancelot Hogben. Hogben’s test involved injecting a woman’s urine into a female African clawed frog (Xenopus laevis), whose females are sensitive to the gonadotropins present in the urine of pregnant women. If the woman was pregnant, the frog ovulated within 2 to 8 h (Figure 4A). If the urine came from a non-pregnant woman, the frog did not spawn (Figure 4B) [26,39,40,41,42].
Figure 4. Hogben test (frog; Xenopus laevis); (A) positive result (pregnancy) and (B) negative result (no pregnancy); an illustrative scheme.
In 1947, Carlos Galli Mainini developed a pregnancy test using the male toad Bufo arenarum Hensel. The toad released sperm in response to stimulation with the chorionic gonadotropin contained in a pregnant woman’s urine, which was injected into the toad’s lymphatic sac. The test result was available after three hours, and the assessment was minimally invasive for the animal: sperm were examined in urine collected from the toad’s cloaca [43,44]. The test was later adapted to various other locally occurring species of frogs and toads [45,46,47].
The frog and toad tests had two significant advantages over tests using mice, rats, or rabbits. Most importantly, the animals could be used for testing many times, and reading the result required neither killing them nor opening the abdominal cavity, because the frog’s eggs are released externally. The second advantage was the relatively short waiting time for the result compared with the earlier mammalian tests. The Hogben frog test remained the world standard for laboratory pregnancy detection for decades [39,40,41,42].
Interestingly, when biological tests were replaced by immunochemical tests in the 1960s, some hospitals released their unwanted African clawed frogs (Xenopus laevis), imported from South Africa, into the wild. According to a study by Vredenburg et al. [47], a deadly fungus (Batrachochytrium dendrobatidis) carried by frogs imported from South Africa subsequently decimated native American frog populations, causing an ecological disaster.
All of these animal tests worked because a pregnant woman’s urine contains a pregnancy-specific hormone, human chorionic gonadotropin (hCG). hCG is produced soon after conception and plays a critical role in the implantation of the embryo in the uterus. However, these bioassays were expensive, sometimes required killing the animals, and took several days to yield a result. They were also prone to false-positive results, especially in small female mammals, because of the similarity between hCG and luteinizing hormone (LH). Most of these bioassays were, in fact, unable to distinguish between the two hormones, except when hCG levels were extremely high [24,42,48].
2.6. 1960s—Agglutination Tests
For four decades, animal bioassays (mice, rats, rabbits, and frogs) were the only practical way to detect pregnancy. The 1960s began the era of immunological tests, which also found application in the detection of early pregnancy, allowing animal testing to be abandoned and enabling the development of rapid, sensitive, and specific home tests [24]. The ability to produce polyclonal antibodies (by immunizing animals) and use them as diagnostic reagents revolutionized laboratory diagnostics and opened up a wide range of possibilities [49].
In the early 1960s, Wide and Gemzell introduced the Wide–Gemzell test, an immunological hemagglutination inhibition method for diagnosing pregnancy [50]. This agglutination inhibition assay was the first pregnancy test that opened the way to tests for home use [50]. The tests were based on agglutination inhibition reactions of sheep red blood cells coated with chorionic gonadotropin (Figure 5). Although cells were used in the testing process, the test was an immunoassay rather than a bioassay.
Figure 5. Pregnancy test based on the hemagglutination inhibition test. (A) Negative result (no pregnancy); (B) positive result (pregnancy); an illustrative scheme.
This method detected hCG concentrations of 200 to 300 IU/L, and the analysis time was only 90 min. The test used a suspension of formalin-preserved sheep erythrocytes sensitized with hCG, together with rabbit anti-hCG antibodies: the suspension of sensitized erythrocytes was combined with the test urine and the rabbit anti-hCG antibodies. If the urine contained no hCG (urine of a non-pregnant woman), the anti-hCG antibodies agglutinated the sensitized sheep erythrocytes (Figure 5A). If the urine contained hCG (urine of a pregnant woman), the anti-hCG antibodies bound the hCG from the urine and did not agglutinate the sensitized erythrocytes (Figure 5B) [50,51]. While this test was much faster and less expensive than conventional bioassays, it was relatively insensitive, especially for early pregnancy diagnosis, and subject to interference from various drugs. Problems of this kind required careful reading of the results, as the presence of certain substances in the urine could produce false-negative or false-positive results. The predictive value (positive and negative) of hemagglutination pregnancy tests was estimated to be as high as 98%, although not all commercial blood cell tests available at the time were equally accurate [51,52]. Urine hemagglutination tests were also an effective tool for detecting ectopic pregnancy [53].
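Because the readout of an inhibition assay is inverted (agglutination means no pregnancy), the decision logic described above can be summarized in a short illustrative Python sketch. This is a simplified model for clarity only: the 200 IU/L cut-off is the sensitivity figure quoted above, while the function name and the binary readout are assumptions made for illustration.

def hemagglutination_inhibition_result(urine_hcg_iu_per_l, detection_limit_iu_per_l=200.0):
    """Interpret a hemagglutination inhibition pregnancy test (simplified model).

    If the urine contains enough hCG, it neutralizes the rabbit anti-hCG
    antibodies, so the hCG-sensitized sheep erythrocytes are NOT agglutinated
    (positive, pregnancy). Without hCG, the free antibodies agglutinate the
    sensitized cells (negative, no pregnancy).
    """
    antibodies_neutralized = urine_hcg_iu_per_l >= detection_limit_iu_per_l
    agglutination_observed = not antibodies_neutralized
    if agglutination_observed:
        return "agglutination -> negative (no pregnancy)"
    return "no agglutination -> positive (pregnancy)"

print(hemagglutination_inhibition_result(0))    # non-pregnant urine
print(hemagglutination_inhibition_result(500))  # pregnant urine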
Almost simultaneously with the hemagglutination tests, slide (latex) agglutination tests came into use, in which the sheep blood cells were replaced with latex particles and the agglutination reaction was performed on glass or plastic (preferably black) plates (Figure 6A,B). For this reason, these tests are sometimes also called latex agglutination tests [54,55,56]. Both hemagglutination tests and slide (latex) agglutination tests proved to be at least as reliable in detecting pregnancy as the biological tests common at the time and successfully replaced them [51,54,55,56].
Figure 6. Pregnancy test based on the latex agglutination inhibition (slide) test. (A) Negative result (no pregnancy); (B) positive result (pregnancy); an illustrative scheme.
2.7. 1970s—Time of the Radioimmunoassay
In the 1970s, the structure of the hCG molecule was precisely determined. This allowed the construction of immunological tests that unequivocally distinguish hCG from luteinizing hormone (LH) [57,58,59,60]. Further research in the 1980s and 1990s into the effects of hormones on reproduction led to the development of methods to identify and measure hCG [61,62]. This knowledge, together with advances in immunology and immunochemistry, became the basis for specific pregnancy tests capable of clearly detecting pregnancy at an early stage.
In 1966, Midgley first measured hCG and luteinizing hormone using a radioimmunoassay with a measuring range of 25 IU/L to 5000 IU/L [63]. Early RIA techniques were unable to distinguish between LH and hCG because both hormones cross-reacted with the antibodies used. To overcome this cross-reaction, the assay was run at a dilution at which LH did not interfere, but the sensitivity for hCG deteriorated as a result.
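The trade-off between cross-reactivity and sensitivity can be illustrated with a deliberately simple linear model in which the apparent hCG concentration equals the true hCG plus the fraction of circulating LH that is read as hCG. The hormone levels and cross-reactivity fractions below are arbitrary illustrative assumptions, not values from the cited studies.

def apparent_hcg(true_hcg_iu_per_l, lh_iu_per_l, cross_reactivity):
    """Apparent hCG reported by an antiserum that also recognizes LH.

    cross_reactivity close to 1.0 mimics the early, non-specific RIAs;
    a value near 0 mimics a beta-subunit-specific assay.
    """
    return true_hcg_iu_per_l + cross_reactivity * lh_iu_per_l

# Hypothetical mid-cycle LH peak of 60 IU/L in a non-pregnant woman:
print(apparent_hcg(true_hcg_iu_per_l=0, lh_iu_per_l=60, cross_reactivity=1.0))   # 60.0 -> spurious "hCG" signal
print(apparent_hcg(true_hcg_iu_per_l=0, lh_iu_per_l=60, cross_reactivity=0.02))  # 1.2  -> effectively negative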
In 1972, Vaitukaitis et al.
[64] published a paper describing an hCG beta-subunit radioimmunoassay that could finally distinguish between hCG and LH, making it potentially useful as an early test for pregnancy. The assay used an antiserum directed against the beta-subunit of human chorionic gonadotropin (β-hCG). With this strategy, it was possible to measure hCG selectively in samples containing both human pituitary luteinizing hormone (hLH) and hCG. The high levels of hLH observed in samples taken during the luteinizing phase of the menstrual cycle or from castrated patients had no effect on the specific detection of β-hCG by this radioimmunoassay (Figure 7A,B) [64].
Figure 7. Radioimmunoassay for pregnancy; (A) positive result (pregnancy) and (B) negative result (no pregnancy); an illustrative scheme.
The sensitivity and specificity of this test were validated by other researchers. The test proved to be of particular value in conditions where hCG concentrations measured by the earlier assays are affected by circulating hLH levels, i.e., early implantation, unruptured ectopic pregnancy, threatened abortion, and trophoblastic disease during chemotherapy [65,66]. RIAs for the detection of the beta-subunit of chorionic gonadotropin performed well in urine, plasma, and serum [67,68]. Clinical specificity was estimated at 99%, and clinical sensitivity in the range of 89–96%, for serum measurements [68]. In 1974, Saxena et al. [69] developed a radioreceptor assay (RRA) for determining the beta-subunit of hCG. The sensitivity of this test was 5 IU/L, which made it possible to detect pregnancy as early as the 6th–8th day of pregnancy, and the analysis time was reduced to 1 h. It is worth noting that a significant further increase in the sensitivity and specificity of pregnancy tests became possible only in the 1980s, with the introduction of monoclonal antibodies against the hCG beta-subunit in place of the previously used polyclonal antibodies [70].
From a technical point of view, it is important to note that the tests developed by Vaitukaitis et al. [64] and Saxena et al. [69] were based on measuring the intensity of ionizing radiation. Radioimmunoassays (RIAs), immunoradiometric assays (IRMAs), and radioreceptor assays (RRAs) are immunoassays that use radioactively labeled molecules to detect the immune complexes formed during the antigen/antibody reaction [71,72]. The era of RIA testing in laboratory diagnostics began in 1959, when Rosalyn Yalow and Solomon Berson developed the first such technique, an isotope dilution radioimmunoassay for detecting insulin in human blood. For this achievement, Yalow was awarded the Nobel Prize in Physiology or Medicine in 1977. In the 1960s and 1970s, the RIA became an important tool in biological research [72]. Radioimmunoassays using radioisotope-labeled antibodies revolutionized laboratory diagnostics and enabled the detection of various biological substances (including, for example, hormones) even at very low concentrations (Figure 7). However, because the method was hazardous and required special radiation protection, it could not leave the laboratory; it was definitely not a home testing strategy.
The beginnings of immunochemical techniques based on enzyme-labeled antibodies date back to the early 1970s [73]. However, it was not until the 1980s that durable, stable immunological reagents suitable for routine laboratory use were obtained [74]. These techniques revolutionized the entire field of immunological diagnostics and gradually replaced radioimmunoassays. Currently, most immunochemical measurement techniques use sandwich and competitive immunoenzymatic methods with colorimetric or luminescence detection (Figure 8).
Figure 8. Immunochemical techniques in pregnancy tests; (A) positive result (pregnancy); (B) negative result (no pregnancy); an illustrative scheme.
At that time, sensitive and specific immunochemical techniques based on the standard ELISA (enzyme-linked immunosorbent assay) and using anti-β-hCG monoclonal antibodies were also developed. The lower limit of detection was 0.2 μg/L after a 2 h incubation of the serum sample at room temperature, followed by washing away the excess enzyme conjugate and a 30 min incubation with the enzyme substrate at room temperature. Within-assay precision was less than 6% [75].
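Quantitative immunoenzymatic assays of this kind convert the measured optical density into a concentration by interpolation on a calibration curve, commonly a four-parameter logistic (4PL) fit of the standards. The Python sketch below shows only that interpolation step; the curve parameters and sample reading are invented for illustration and are not taken from the cited assay.

def concentration_from_od(od, a, b, c, d):
    """Invert a four-parameter logistic (4PL) calibration curve.

    Model: od = d + (a - d) / (1 + (conc / c) ** b), where
    a = response at zero concentration, d = response at saturation,
    c = inflection point (in concentration units), b = slope factor.
    """
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

# Purely illustrative calibration parameters for a sandwich anti-beta-hCG ELISA:
a, b, c, d = 0.05, 1.2, 15.0, 2.0   # optical density units; c in IU/L
sample_od = 0.80
print(f"Estimated hCG: {concentration_from_od(sample_od, a, b, c, d):.1f} IU/L")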