| id (int32, 0–100k) | text (string, 21–3.54k chars) | source (string, 1–124 chars) | similarity (float32, 0.79–0.89) |
|---|---|---|---|
200
|
Discrete probability theory deals with events that occur in countable sample spaces. Examples: throwing dice, experiments with decks of cards, random walks, and tossing coins. Classical definition: initially, the probability of an event occurring was defined as the number of cases favorable for the event, over the number of total outcomes possible in an equiprobable sample space: see Classical definition of probability. For example, if the event is "occurrence of an even number when a die is rolled", the probability is given by $\tfrac{3}{6}=\tfrac{1}{2}$, since 3 faces out of the 6 have even numbers and each face has the same probability of appearing.
|
Mathematical probability
| 0.855788
|
201
|
Common intuition suggests that if a fair coin is tossed many times, then roughly half of the time it will turn up heads, and the other half it will turn up tails. Furthermore, the more often the coin is tossed, the more likely it should be that the ratio of the number of heads to the number of tails will approach unity. Modern probability theory provides a formal version of this intuitive idea, known as the law of large numbers. This law is remarkable because it is not assumed in the foundations of probability theory, but instead emerges from these foundations as a theorem.
|
Mathematical probability
| 0.855788
|
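The law of large numbers described in the row above is easy to watch numerically. A minimal sketch in Python (the toss counts and seed are arbitrary choices, not from the source):

```python
import random

def heads_fraction(n_tosses: int, seed: int = 0) -> float:
    """Fraction of heads in n_tosses fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The fraction drifts toward 1/2 as the number of tosses grows -- the
# behaviour emerges from the simulation; it is not assumed anywhere above.
for n in (10, 1_000, 100_000):
    print(n, heads_fraction(n))
```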
202
|
Calcium ions (Ca2+) contribute to the physiology and biochemistry of organisms' cells. They play an important role in signal transduction pathways, where they act as a second messenger, in neurotransmitter release from neurons, in contraction of all muscle cell types, and in fertilization. Many enzymes require calcium ions as a cofactor, including several of the coagulation factors. Extracellular calcium is also important for maintaining the potential difference across excitable cell membranes, as well as proper bone formation.
|
Calcium in biology
| 0.855787
|
203
|
In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber. Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at a temperature T. A notation different from Kirchhoff's is used here: the emitting power E(T, i) denotes a dimensioned quantity, the total radiation emitted by a body labeled by index i at temperature T, while the total absorption ratio a(T, i) of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature T. (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.)
|
Black body spectrum
| 0.855762
|
204
|
These quanta were called photons and the blackbody cavity was thought of as containing a gas of photons. In addition, it led to the development of quantum probability distributions, called Fermi–Dirac statistics and Bose–Einstein statistics, each applicable to a different class of particles, fermions and bosons. The wavelength at which the radiation is strongest is given by Wien's displacement law, and the overall power emitted per unit area is given by the Stefan–Boltzmann law.
|
Black body spectrum
| 0.855762
|
205
|
Calculating the blackbody curve was a major challenge in theoretical physics during the late nineteenth century. The problem was solved in 1901 by Max Planck in the formalism now known as Planck's law of blackbody radiation. By making changes to Wien's radiation law (not to be confused with Wien's displacement law) consistent with thermodynamics and electromagnetism, he found a mathematical expression fitting the experimental data satisfactorily.
|
Black body spectrum
| 0.855762
|
206
|
According to the classical theory of radiation, if each Fourier mode of the equilibrium radiation (in an otherwise empty cavity with perfectly reflective walls) is considered as a degree of freedom capable of exchanging energy, then, according to the equipartition theorem of classical physics, there would be an equal amount of energy in each mode. Since there are an infinite number of modes, this would imply infinite heat capacity, as well as a nonphysical spectrum of emitted radiation that grows without bound with increasing frequency, a problem known as the ultraviolet catastrophe. At longer wavelengths this deviation is not so noticeable, as $h\nu$ and $nh\nu$ are very small. At the shorter wavelengths of the ultraviolet range, however, classical theory predicts the energy emitted tends to infinity, hence the ultraviolet catastrophe.
|
Black body spectrum
| 0.855762
|
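To make the ultraviolet catastrophe of the row above concrete, one can compare the classical Rayleigh–Jeans spectral radiance, $B_\nu = 2\nu^2 k_B T/c^2$, with Planck's law, $B_\nu = \frac{2h\nu^3/c^2}{e^{h\nu/k_B T}-1}$ (see row 205). A sketch with SI constants; the temperature and frequencies are arbitrary example values:

```python
import math

H = 6.626e-34    # Planck constant (J s)
KB = 1.381e-23   # Boltzmann constant (J/K)
C = 2.998e8      # speed of light (m/s)

def rayleigh_jeans(nu: float, t: float) -> float:
    """Classical spectral radiance: grows without bound as nu increases."""
    return 2 * nu**2 * KB * t / C**2

def planck(nu: float, t: float) -> float:
    """Planck's law: matches the classical result at low nu but is
    exponentially suppressed at high nu, avoiding the catastrophe."""
    return (2 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * t))

T = 5000.0  # kelvin
for nu in (1e12, 1e14, 1e16):  # microwave -> near-IR -> far-UV
    print(f"nu={nu:.0e}  RJ={rayleigh_jeans(nu, T):.3e}  Planck={planck(nu, T):.3e}")
```

At 10^16 Hz the classical value keeps growing while the Planck value is essentially zero, which is exactly the divergence the passage describes.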
207
|
When flattened to a one-form, A can be decomposed via the Hodge decomposition theorem as the sum of an exact, a coexact, and a harmonic form, $A = d\alpha + \delta\beta + \gamma$. There is gauge freedom in A in that of the three forms in this decomposition, only the coexact form has any effect on the electromagnetic tensor $F = dA$. Exact forms are closed, as are harmonic forms over an appropriate domain, so $dd\alpha = 0$ and $d\gamma = 0$, always. So regardless of what $\alpha$ and $\gamma$ are, we are left with simply $F = d\delta\beta$.
|
Electromagnetic potential
| 0.855622
|
208
|
Beckmann's version of this story has been widely copied in several books and internet sites, usually without his reservations and sometimes with fanciful embellishments. Several attempts to find corroborating evidence for this story, or even for the existence of Valmes, have failed. The proof that four is the highest degree of a general polynomial for which such solutions can be found was first given in the Abel–Ruffini theorem in 1824, proving that all attempts at solving higher-degree polynomials algebraically would be futile. The notes left by Évariste Galois prior to dying in a duel in 1832 later led to an elegant complete theory of the roots of polynomials, of which this theorem was one result.
|
Fourth-degree equation
| 0.855622
|
209
|
Lodovico Ferrari is credited with the discovery of the solution to the quartic in 1540, but since this solution, like all algebraic solutions of the quartic, requires the solution of a cubic to be found, it could not be published immediately. The solution of the quartic was published together with that of the cubic by Ferrari's mentor Gerolamo Cardano in the book Ars Magna. The Soviet historian I. Y. Depman claimed that even earlier, in 1486, the Spanish mathematician Valmes was burned at the stake for claiming to have solved the quartic equation. Inquisitor General Tomás de Torquemada allegedly told Valmes that it was the will of God that such a solution be inaccessible to human understanding. However, Petr Beckmann, who popularized Depman's story in the West, said that it was unreliable and hinted that it may have been invented as Soviet antireligious propaganda.
|
Fourth-degree equation
| 0.855622
|
210
|
Bioinformatics tools exist to assist with interpretation of mass spectra (see de novo peptide sequencing), to compare or analyze protein sequences (see sequence analysis), or search databases using peptide or protein sequences (see BLAST).
|
Protein sequencing
| 0.855612
|
211
|
To circumvent this problem, Biochemistry Online suggests heating separate samples for different times, analysing each resulting solution, and extrapolating back to zero hydrolysis time. Rastall suggests a variety of reagents to prevent or reduce degradation, such as thiol reagents or phenol to protect tryptophan and tyrosine from attack by chlorine, and pre-oxidising cysteine. He also suggests measuring the quantity of ammonia evolved to determine the extent of amide hydrolysis.
|
Protein sequencing
| 0.855612
|
212
|
Though it is now regarded as pseudoscience, belief in a mystical significance of numbers, known as numerology, permeated ancient and medieval thought. Numerology heavily influenced the development of Greek mathematics, stimulating the investigation of many problems in number theory which are still of interest today. During the 19th century, mathematicians began to develop many different abstractions which share certain properties of numbers, and may be seen as extending the concept. Among the first were the hypercomplex numbers, which consist of various extensions or modifications of the complex number system. In modern mathematics, number systems are considered important special examples of more general algebraic structures such as rings and fields, and the application of the term "number" is a matter of convention, without fundamental significance.
|
Numerical value
| 0.855394
|
213
|
Their study or usage is called arithmetic, a term which may also refer to number theory, the study of the properties of numbers. Besides their practical uses, numbers have cultural significance throughout the world. For example, in Western society, the number 13 is often regarded as unlucky, and "a million" may signify "a lot" rather than an exact quantity.
|
Numerical value
| 0.855394
|
214
|
The fundamental theorem of algebra asserts that the complex numbers form an algebraically closed field, meaning that every polynomial with complex coefficients has a root in the complex numbers. Like the reals, the complex numbers form a field, which is complete, but unlike the real numbers, it is not ordered. That is, there is no consistent meaning assignable to saying that i is greater than 1, nor is there any meaning in saying that i is less than 1. In technical terms, the complex numbers lack a total order that is compatible with field operations.
|
Numerical value
| 0.855394
|
215
|
Secondary structure prediction is a set of techniques in bioinformatics that aim to predict the local secondary structures of proteins based only on knowledge of their amino acid sequence. For proteins, a prediction consists of assigning regions of the amino acid sequence as likely alpha helices, beta strands (often noted as "extended" conformations), or turns. The success of a prediction is determined by comparing it to the results of the DSSP algorithm (or similar, e.g. STRIDE) applied to the crystal structure of the protein. Specialized algorithms have been developed for the detection of specific well-defined patterns such as transmembrane helices and coiled coils in proteins. The best modern methods of secondary structure prediction in proteins were claimed to reach 80% accuracy after using machine learning and sequence alignments; this high accuracy allows the use of the predictions as a feature improving fold recognition and ab initio protein structure prediction, classification of structural motifs, and refinement of sequence alignments. The accuracy of current protein secondary structure prediction methods is assessed in weekly benchmarks such as LiveBench and EVA.
|
Protein folding problem
| 0.855157
|
216
|
These groups can therefore interact in the protein structure. Proteins consist mostly of 20 different types of L-α-amino acids (the proteinogenic amino acids). These can be classified according to the chemistry of the side chain, which also plays an important structural role.
|
Protein folding problem
| 0.855157
|
217
|
These methods use rotamer libraries, which are collections of favorable conformations for each residue type in proteins. Rotamer libraries may contain information about the conformation, its frequency, and the standard deviations about mean dihedral angles, which can be used in sampling. Rotamer libraries are derived from structural bioinformatics or other statistical analysis of side-chain conformations in known experimental structures of proteins, such as by clustering the observed conformations for tetrahedral carbons near the staggered (60°, 180°, -60°) values.
|
Protein folding problem
| 0.855157
|
218
|
Accurate packing of the amino acid side chains represents a separate problem in protein structure prediction. Methods that specifically address the problem of predicting side-chain geometry include dead-end elimination and the self-consistent mean field methods. The side chain conformations with low energy are usually determined on the rigid polypeptide backbone and using a set of discrete side chain conformations known as "rotamers." The methods attempt to identify the set of rotamers that minimize the model's overall energy.
|
Protein folding problem
| 0.855157
|
219
|
Ab initio or de novo protein modelling methods seek to build three-dimensional protein models "from scratch", i.e., based on physical principles rather than (directly) on previously solved structures. There are many possible procedures that either attempt to mimic protein folding or apply some stochastic method to search possible solutions (i.e., global optimization of a suitable energy function). These procedures tend to require vast computational resources, and have thus only been carried out for tiny proteins. To predict protein structure de novo for larger proteins will require better algorithms and larger computational resources like those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing (such as Folding@home, the Human Proteome Folding Project and Rosetta@Home).
|
Protein folding problem
| 0.855157
|
220
|
As a counter-example, consider the non-square-free n = 60: the greatest common divisor of 30 and its complement 2 would be 2, while it should be the bottom element 1. Other examples of Boolean algebras arise from topological spaces: if X is a topological space, then the collection of all subsets of X which are both open and closed forms a Boolean algebra with the operations ∨ := ∪ (union) and ∧ := ∩ (intersection). If R is an arbitrary ring, then its set of central idempotents, which is the set $\{e \in R : e^2 = e \text{ and } ex = xe \text{ for all } x \in R\}$, becomes a Boolean algebra when its operations are defined by $e \vee f := e + f - ef$ and $e \wedge f := ef$.
|
Boolean algebra (structure)
| 0.855125
|
221
|
This lattice is a Boolean algebra if and only if n is square-free. The bottom and top elements of this Boolean algebra are the natural numbers 1 and n, respectively. The complement of a is given by n/a.
|
Boolean algebra (structure)
| 0.855125
|
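The divisor-lattice claims in the two rows above can be checked mechanically: with meet = gcd, join = lcm, and complement a ↦ n/a, the complement law a ∧ ¬a = 1 holds exactly when n is square-free. A sketch (the helper names are mine):

```python
from math import gcd

def divisors(n: int) -> list[int]:
    return [d for d in range(1, n + 1) if n % d == 0]

def is_boolean_divisor_lattice(n: int) -> bool:
    """Check the complement law a AND NOT-a == bottom (1) for every divisor a,
    with meet = gcd and complement a -> n // a."""
    return all(gcd(a, n // a) == 1 for a in divisors(n))

print(is_boolean_divisor_lattice(30))  # True: 30 = 2*3*5 is square-free
print(is_boolean_divisor_lattice(60))  # False: gcd(30, 60 // 30) == 2, not 1
```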
222
|
A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Given any linearly ordered set L with a least element, the interval algebra is the smallest algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, defining a ≤ b if a divides b, forms a distributive lattice.
|
Boolean algebra (structure)
| 0.855125
|
223
|
Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra (that is, the set of sentences in the propositional calculus modulo logical equivalence). This construction yields a Boolean algebra. It is in fact the free Boolean algebra on κ generators.
|
Boolean algebra (structure)
| 0.855125
|
224
|
This can for example be used to show that the following laws (consensus theorems) are generally valid in all Boolean algebras: (a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) ≡ (a ∨ b) ∧ (¬a ∨ c), and (a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) ≡ (a ∧ b) ∨ (¬a ∧ c). The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection). The smallest element 0 is the empty set and the largest element 1 is the set S itself. After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms. The set A of all subsets of S that are either finite or cofinite is a Boolean algebra and an algebra of sets called the finite–cofinite algebra. If S is infinite then the set of all cofinite subsets of S, which is called the Fréchet filter, is a free ultrafilter on A.
|
Boolean algebra (structure)
| 0.855125
|
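Since an equation holds in every Boolean algebra if and only if it holds in the two-element one (see the next row), the first consensus law quoted above can be verified by brute force over {0, 1}³; a minimal sketch:

```python
from itertools import product

def consensus_holds() -> bool:
    """Verify (a|b) & (~a|c) & (b|c) == (a|b) & (~a|c) over {0, 1}^3."""
    for a, b, c in product((0, 1), repeat=3):
        na = 1 - a  # complement in the two-element Boolean algebra
        lhs = (a | b) & (na | c) & (b | c)
        rhs = (a | b) & (na | c)
        if lhs != rhs:
            return False
    return True

print(consensus_holds())  # True: the identity holds in all Boolean algebras
```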
225
|
The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1, and is defined by the usual truth-table rules. It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not. Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent. The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input–output behavior. Furthermore, every possible input–output behavior can be modeled by a suitable Boolean expression. The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving several variables is generally true in all Boolean algebras if and only if it is true in the two-element Boolean algebra (which can be checked by a trivial brute-force algorithm for small numbers of variables).
|
Boolean algebra (structure)
| 0.855125
|
226
|
A filter of the Boolean algebra A is a subset p such that for all x, y in p we have x ∧ y in p and for all a in A we have a ∨ x in p. The dual of a maximal (or prime) ideal in a Boolean algebra is an ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from A to the two-element Boolean algebra. The statement that every filter in a Boolean algebra can be extended to an ultrafilter is called the Ultrafilter Theorem and cannot be proven in ZF, if ZF is consistent. Within ZF, it is strictly weaker than the axiom of choice. The Ultrafilter Theorem has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc.
|
Boolean algebra (structure)
| 0.855125
|
227
|
An ideal of the Boolean algebra A is a subset I such that for all x, y in I we have x ∨ y in I and for all a in A we have a ∧ x in I. This notion of ideal coincides with the notion of ring ideal in the Boolean ring A. An ideal I of A is called prime if I ≠ A and if a ∧ b in I always implies a in I or b in I. Furthermore, if I is prime, then for every a ∈ A we have a ∧ −a = 0 ∈ I, and hence a ∈ I or −a ∈ I. An ideal I of A is called maximal if I ≠ A and if the only ideal properly containing I is A itself. For an ideal I, if a ∉ I and −a ∉ I, then the ideal generated by I ∪ {a} or by I ∪ {−a} is proper and properly contains I; hence such an I is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with the ring-theoretic ones of prime ideal and maximal ideal in the Boolean ring A. The dual of an ideal is a filter.
|
Boolean algebra (structure)
| 0.855125
|
228
|
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual.
|
Boolean algebra (structure)
| 0.855125
|
229
|
A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and"), ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the axioms of associativity, commutativity, absorption, identity, distributivity, and complementation (a ∨ ¬a = 1 and a ∧ ¬a = 0) hold. Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms, as they can be derived from the other axioms (see Proven properties). A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be distinct elements in order to exclude this case.)
|
Boolean algebra (structure)
| 0.855125
|
230
|
In accelerator physics, a kinematically complete experiment is an experiment in which all kinematic parameters of all collision products are determined. If the final state of the collision involves n particles, 3n momentum components (3 Cartesian coordinates for each particle) need to be determined. However, these components are linked to each other by momentum conservation in each direction (3 equations) and energy conservation (1 equation), so that only 3n − 4 components are linearly independent. Therefore, the measurement of 3n − 4 momentum components constitutes a kinematically complete experiment.
|
Kinematically complete experiment
| 0.855008
|
231
|
As well as discrete metric spaces, there are more general discrete topological spaces, finite metric spaces, and finite topological spaces. The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems.
|
Discrete math
| 0.854955
|
232
|
Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals.
|
Discrete math
| 0.854955
|
233
|
In discrete calculus and the calculus of finite differences, a function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right.
|
Discrete math
| 0.854955
|
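As an illustration of the row above, the differential equation y′ = −ky can be replaced by the difference equation y_{n+1} = y_n − k·h·y_n on a grid of step h. A sketch comparing the recurrence to the exact solution e^{−kt} (the step size and rate constant are arbitrary choices):

```python
import math

def euler_decay(k: float, h: float, steps: int, y0: float = 1.0) -> list[float]:
    """Solve the difference equation y[n+1] = y[n] - k*h*y[n], the discrete
    counterpart of the differential equation y' = -k*y."""
    ys = [y0]
    for _ in range(steps):
        ys.append(ys[-1] * (1.0 - k * h))
    return ys

k, h, steps = 1.0, 0.01, 100
approx = euler_decay(k, h, steps)[-1]   # value at t = 1.0
exact = math.exp(-k * 1.0)
print(approx, exact)  # ~0.3660 vs ~0.3679: the recurrence tracks the ODE
```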
234
|
They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory and topological graph theory has close links to topology. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
|
Discrete math
| 0.854955
|
235
|
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures.
|
Discrete math
| 0.854955
|
236
|
In university curricula, discrete mathematics appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, not unlike precalculus in this respect. The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
|
Discrete math
| 0.854955
|
237
|
Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems. Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well.
|
Discrete math
| 0.854955
|
238
|
However, there is no exact definition of the term "discrete mathematics". The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deal with finite sets, particularly those areas relevant to business. Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits.
|
Discrete math
| 0.854955
|
239
|
Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a bijection with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers).
|
Discrete math
| 0.854955
|
240
|
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, Diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include the geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, Diophantine approximation, p-adic analysis and function fields.
|
Discrete math
| 0.854955
|
241
|
Modern molecular phylogenetics largely ignores morphological characters, relying on DNA sequences as data. Molecular analysis of DNA sequences from most families of flowering plants enabled the Angiosperm Phylogeny Group to publish in 1998 a phylogeny of flowering plants, answering many of the questions about relationships among angiosperm families and species. The theoretical possibility of a practical method for identification of plant species and commercial varieties by DNA barcoding is the subject of active current research.
|
Plant biology
| 0.854866
|
242
|
20th century developments in plant biochemistry have been driven by modern techniques of organic chemical analysis, such as spectroscopy, chromatography and electrophoresis. With the rise of the related molecular-scale biological approaches of molecular biology, genomics, proteomics and metabolomics, the relationship between the plant genome and most aspects of the biochemistry, physiology, morphology and behaviour of plants can be subjected to detailed experimental analysis. The concept originally stated by Gottlieb Haberlandt in 1902 that all plant cells are totipotent and can be grown in vitro ultimately enabled the use of genetic engineering experimentally to knock out a gene or genes responsible for a specific trait, or to add genes such as GFP that report when a gene of interest is being expressed.
|
Plant biology
| 0.854866
|
243
|
Building on the extensive earlier work of Alphonse de Candolle, Nikolai Vavilov (1887–1943) produced accounts of the biogeography, centres of origin, and evolutionary history of economic plants. Particularly since the mid-1960s there have been advances in understanding of the physics of plant physiological processes such as transpiration (the transport of water within plant tissues), the temperature dependence of rates of water evaporation from the leaf surface and the molecular diffusion of water vapour and carbon dioxide through stomatal apertures. These developments, coupled with new methods for measuring the size of stomatal apertures, and the rate of photosynthesis have enabled precise description of the rates of gas exchange between plants and the atmosphere. Innovations in statistical analysis by Ronald Fisher, Frank Yates and others at Rothamsted Experimental Station facilitated rational experimental design and data analysis in botanical research.
|
Plant biology
| 0.854866
|
244
|
The discipline of plant ecology was pioneered in the late 19th century by botanists such as Eugenius Warming, who produced the hypothesis that plants form communities, and his mentor and successor Christen C. Raunkiær whose system for describing plant life forms is still in use today. The concept that the composition of plant communities such as temperate broadleaf forest changes by a process of ecological succession was developed by Henry Chandler Cowles, Arthur Tansley and Frederic Clements. Clements is credited with the idea of climax vegetation as the most complex vegetation that an environment can support and Tansley introduced the concept of ecosystems to biology.
|
Plant biology
| 0.854866
|
245
|
Building upon the gene-chromosome theory of heredity that originated with Gregor Mendel (1822–1884), August Weismann (1834–1914) proved that inheritance only takes place through gametes. No other cells can pass on inherited characters. The work of Katherine Esau (1898–1997) on plant anatomy is still a major foundation of modern botany. Her books Plant Anatomy and Anatomy of Seed Plants have been key plant structural biology texts for more than half a century.
|
Plant biology
| 0.854866
|
246
|
The single-celled green alga Chlamydomonas reinhardtii, while not an embryophyte itself, contains a green-pigmented chloroplast related to that of land plants, making it useful for study. A red alga Cyanidioschyzon merolae has also been used to study some basic chloroplast functions. Spinach, peas, soybeans and a moss Physcomitrella patens are commonly used to study plant cell biology. Agrobacterium tumefaciens, a soil rhizosphere bacterium, can attach to plant cells and infect them with a callus-inducing Ti plasmid by horizontal gene transfer, causing a callus infection called crown gall disease. Schell and Van Montagu (1977) hypothesised that the Ti plasmid could be a natural vector for introducing the Nif gene responsible for nitrogen fixation in the root nodules of legumes and other plant species. Today, genetic modification of the Ti plasmid is one of the main techniques for introduction of transgenes to plants and the creation of genetically modified crops.
|
Plant biology
| 0.854866
|
247
|
Model plants such as Arabidopsis thaliana are used for studying the molecular biology of plant cells and the chloroplast. Ideally, these organisms have small genomes that are well known or completely sequenced, small stature and short generation times. Corn has been used to study mechanisms of photosynthesis and phloem loading of sugar in C4 plants.
|
Plant biology
| 0.854866
|
248
|
A considerable amount of new knowledge about plant function comes from studies of the molecular genetics of model plants such as the Thale cress, Arabidopsis thaliana, a weedy species in the mustard family (Brassicaceae). The genome or hereditary information contained in the genes of this species is encoded by about 135 million base pairs of DNA, forming one of the smallest genomes among flowering plants. Arabidopsis was the first plant to have its genome sequenced, in 2000. The sequencing of some other relatively small genomes, of rice (Oryza sativa) and Brachypodium distachyon, has made them important model species for understanding the genetics, cellular and molecular biology of cereals, grasses and monocots generally.
|
Plant biology
| 0.854866
|
249
|
The finding in 1939 that plant callus could be maintained in culture containing IAA, followed by the observation in 1947 that it could be induced to form roots and shoots by controlling the concentration of growth hormones were key steps in the development of plant biotechnology and genetic modification. Cytokinins are a class of plant hormones named for their control of cell division (especially cytokinesis). The natural cytokinin zeatin was discovered in corn, Zea mays, and is a derivative of the purine adenine.
|
Plant biology
| 0.854866
|
250
|
Plant responses to climate and other environmental changes can inform our understanding of how these changes affect ecosystem function and productivity. For example, plant phenology can be a useful proxy for temperature in historical climatology, and the biological impact of climate change and global warming. Palynology, the analysis of fossil pollen deposits in sediments from thousands or millions of years ago allows the reconstruction of past climates. Estimates of atmospheric CO2 concentrations since the Palaeozoic have been obtained from stomatal densities and the leaf shapes and sizes of ancient land plants. Ozone depletion can expose plants to higher levels of ultraviolet radiation-B (UV-B), resulting in lower growth rates. Moreover, information from studies of community ecology, plant systematics, and taxonomy is essential to understanding vegetation change, habitat destruction and species extinction.
|
Plant biology
| 0.854866
|
251
|
Plant biochemistry is the study of the chemical processes used by plants. Some of these processes are used in their primary metabolism like the photosynthetic Calvin cycle and crassulacean acid metabolism. Others make specialised materials like the cellulose and lignin used to build their bodies, and secondary products like resins and aroma compounds.
|
Plant biology
| 0.854866
|
252
|
Cell Calcium is a monthly peer-reviewed scientific journal published by Elsevier that covers the field of cell biology and focuses mainly on calcium signalling and metabolism in living organisms.
|
Cell Calcium
| 0.854799
|
253
|
In biochemistry, a hypothetical protein is a protein whose existence has been predicted, but for which there is a lack of experimental evidence that it is expressed in vivo. Sequencing of several genomes has resulted in numerous predicted open reading frames to which functions cannot be readily assigned. These proteins, either orphan or conserved hypothetical proteins, make up ~ 20% to 40% of proteins encoded in each newly sequenced genome.
|
Hypothetical protein
| 0.854764
|
254
|
Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field.
|
Structure (mathematical logic)
| 0.85465
|
255
|
These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level. Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information.
|
Biochemistry
| 0.854551
|
256
|
In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations.
|
Biochemistry
| 0.854551
|
257
|
In 1877, Felix Hoppe-Seyler used the term (biochemie in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister. It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life.
|
Biochemistry
| 0.854551
|
258
|
Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists. The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry.
|
Biochemistry
| 0.854551
|
259
|
At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process, alcoholic fermentation, in cell-free extracts in 1897 to be the birth of biochemistry.
|
Biochemistry
| 0.854551
|
260
|
The four main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small molecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity.
|
Biochemistry
| 0.854551
|
261
|
Nevertheless, composition of relations and manipulation of the operators according to Schröder rules, provides a calculus to work in the power set of A × B . {\displaystyle A\times B.} In contrast to homogeneous relations, the composition of relations operation is only a partial function. The necessity of matching range to domain of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter of category theory as in the category of sets, except that the morphisms of this category are relations. The objects of the category Rel are sets, and the relation-morphisms compose as required in a category.
|
Binary relations
| 0.854406
|
262
|
Developments in algebraic logic have facilitated usage of binary relations. The calculus of relations includes the algebra of sets, extended by composition of relations and the use of converse relations. The inclusion $R \subseteq S$, meaning that aRb implies aSb, sets the scene in a lattice of relations. But since $P \subseteq Q \equiv (P \cap {\bar{Q}} = \varnothing) \equiv (P \cap Q = P)$, the inclusion symbol is superfluous.
|
Binary relations
| 0.854406
|
263
|
Binary relations have been described through their induced concept lattices: A concept C ⊂ R satisfies two properties: (1) The logical matrix of C is the outer product of logical vectors: $C_{ij} = u_i v_j$, with u, v logical vectors. (2) C is maximal, not contained in any other outer product. Thus C is described as a non-enlargeable rectangle. For a given relation $R \subseteq X \times Y$, the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion $\sqsubseteq$ forming a preorder. The MacNeille completion theorem (1937) (that any partial order may be embedded in a complete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices".
|
Binary relations
| 0.854406
|
264
|
However, NMR experiments are able to provide information from which a subset of distances between pairs of atoms can be estimated, and the final possible conformations for a protein are determined by solving a distance geometry problem. Dual polarisation interferometry is a quantitative analytical method for measuring the overall protein conformation and conformational changes due to interactions or other stimulus. Circular dichroism is another laboratory technique for determining internal β-sheet / α-helical composition of proteins.
|
Structural proteins
| 0.854386
|
265
|
A key question in molecular biology is how proteins evolve, i.e. how can mutations (or rather changes in amino acid sequence) lead to new structures and functions? Most amino acids in a protein can be changed without disrupting activity or function, as can be seen from numerous homologous proteins across species (as collected in specialized databases for protein families, e.g. PFAM). In order to prevent dramatic consequences of mutations, a gene may be duplicated before it can mutate freely.
|
Structural proteins
| 0.854386
|
266
|
Circuit theory deals with electrical networks where the fields are largely confined around current carrying conductors. In such circuits, even Maxwell's equations can be dispensed with and simpler formulations used. On the other hand, a quantum treatment of electromagnetism is important in chemistry. Chemical reactions and chemical bonding are the result of quantum mechanical interactions of electrons around atoms. Quantum considerations are also necessary to explain the behaviour of many electronic devices, for instance the tunnel diode.
|
Introduction to electromagnetism
| 0.854357
|
267
|
Classical physics is still an accurate approximation in most situations involving macroscopic objects. With few exceptions, quantum theory is only necessary at the atomic scale and a simpler classical treatment can be applied. Further simplifications of treatment are possible in limited situations.
|
Introduction to electromagnetism
| 0.854357
|
268
|
Albert Einstein showed that the magnetic field arises through the relativistic motion of the electric field and thus magnetism is merely a side effect of electricity. The modern theoretical treatment of electromagnetism is as a quantum field in quantum electrodynamics. In many situations of interest to electrical engineering, it is not necessary to apply quantum theory to get correct results.
|
Introduction to electromagnetism
| 0.854357
|
269
|
The fundamental law that describes the gravitational force on a massive object in classical physics is Newton's law of gravity. Analogously, Coulomb's law is the fundamental law that describes the force that charged objects exert on one another. It is given by the formula $F = k_{\text{e}}\frac{q_1 q_2}{r^2}$, where F is the force, $k_{\text{e}}$ is the Coulomb constant, q1 and q2 are the magnitudes of the two charges, and r is the distance between them. It describes the fact that like charges repel one another whereas opposite charges attract one another and that the stronger the charges of the particles, the stronger the force they exert on one another. The law is also an inverse-square law, which means that as the distance between two particles is doubled, the force on them is reduced by a factor of four.
|
Introduction to electromagnetism
| 0.854357
|
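A small numeric check of Coulomb's law as stated in the row above, including its inverse-square behaviour (the charges and distances are arbitrary example values):

```python
KE = 8.988e9  # Coulomb constant, N m^2 / C^2

def coulomb_force(q1: float, q2: float, r: float) -> float:
    """Magnitude of the electrostatic force between two point charges
    separated by distance r (metres), charges in coulombs."""
    return KE * abs(q1 * q2) / r**2

f1 = coulomb_force(1e-6, 1e-6, 0.10)  # two 1 uC charges, 10 cm apart
f2 = coulomb_force(1e-6, 1e-6, 0.20)  # doubling the distance...
print(f1, f2, f1 / f2)  # ...reduces the force by a factor of 4
```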
270
|
They are needed to convert high voltage mains electricity into low voltage electricity which can be safely used in homes. Maxwell's formulation of the law is given in the Maxwell–Faraday equation—the fourth and final of Maxwell's equations—which states that a time-varying magnetic field produces an electric field. Together, Maxwell's equations provide a single uniform theory of the electric and magnetic fields and Maxwell's work in creating this theory has been called "the second great unification in physics" after the first great unification of Newton's law of universal gravitation.
|
Introduction to electromagnetism
| 0.854357
|
271
|
In physics, fields are entities that interact with matter and can be described mathematically by assigning a value to each point in space and time. Vector fields are fields which are assigned both a numerical value and a direction at each point in space and time. Electric charges produce a vector field called the electric field. The numerical value of the electric field, also called the electric field strength, determines the strength of the electric force that a charged particle will feel in the field and the direction of the field determines which direction the force will be in.
|
Introduction to electromagnetism
| 0.854357
|
272
|
The discovery that certain toxic chemicals administered in combination can cure certain cancers ranks as one of the greatest in modern medicine. Childhood ALL (acute lymphoblastic leukemia), testicular cancer, and Hodgkin's disease, previously universally fatal, are now generally curable diseases. Combination regimens have also proved effective in the adjuvant setting, in reducing the risk of recurrence after surgery for high-risk breast cancer, colon cancer, and lung cancer, among others. The overall impact of chemotherapy on cancer survival can be difficult to estimate, since improved cancer screening, prevention (e.g. anti-smoking campaigns), and detection all influence statistics on cancer incidence and mortality.
|
Combination chemotherapy
| 0.854107
|
273
|
Molecular genetics has uncovered signalling networks that regulate cellular activities such as proliferation and survival. In a particular cancer, such a network may be radically altered, due to a chance somatic mutation. Targeted therapy inhibits the metabolic pathway that underlies that type of cancer's cell division.
|
Combination chemotherapy
| 0.854107
|
274
|
Binary GCD algorithm: Efficient way of calculating GCD. Booth's multiplication algorithm Chakravala method: a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation Discrete logarithm: Baby-step giant-step Index calculus algorithm Pollard's rho algorithm for logarithms Pohlig–Hellman algorithm Euclidean algorithm: computes the greatest common divisor Extended Euclidean algorithm: also solves the equation ax + by = c Integer factorization: breaking an integer into its prime factors Congruence of squares Dixon's algorithm Fermat's factorization method General number field sieve Lenstra elliptic curve factorization Pollard's p − 1 algorithm Pollard's rho algorithm prime factorization algorithm Quadratic sieve Shor's algorithm Special number field sieve Trial division Multiplication algorithms: fast multiplication of two numbers Karatsuba algorithm Schönhage–Strassen algorithm Toom–Cook multiplication Modular square root: computing square roots modulo a prime number Tonelli–Shanks algorithm Cipolla's algorithm Berlekamp's root finding algorithm Odlyzko–Schönhage algorithm: calculates nontrivial zeroes of the Riemann zeta function Lenstra–Lenstra–Lovász algorithm (also known as LLL algorithm): find a short, nearly orthogonal lattice basis in polynomial time Primality tests: determining whether a given number is prime AKS primality test Baillie–PSW primality test Fermat primality test Lucas primality test Miller–Rabin primality test Sieve of Atkin Sieve of Eratosthenes Sieve of Sundaram
|
Combinatorial algorithms
| 0.854075
|
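As one concrete instance from the list above, a sketch of the extended Euclidean algorithm, which returns gcd(a, b) together with Bézout coefficients x, y solving ax + by = gcd(a, b):

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
print(g, x, y, 240 * x + 46 * y)  # 2 -9 47 2
```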
275
|
Basic Local Alignment Search Tool also known as BLAST: an algorithm for comparing primary biological sequence information Kabsch algorithm: calculate the optimal alignment of two sets of points in order to compute the root mean squared deviation between two protein structures. Velvet: a set of algorithms manipulating de Bruijn graphs for genomic sequence assembly Sorting by signed reversals: an algorithm for understanding genomic evolution. Maximum parsimony (phylogenetics): an algorithm for finding the simplest phylogenetic tree to explain a given character matrix. UPGMA: a distance-based phylogenetic tree construction algorithm. Bloom Filter: probabilistic data structure used to test for the existence of an element within a set. Primarily used in bioinformatics to test for the existence of a k-mer in a sequence or sequences.
|
Combinatorial algorithms
| 0.854075
|
276
|
Closest pair problem: find the pair of points (from a set of points) with the smallest distance between them Collision detection algorithms: check for the collision or intersection of two given solids Cone algorithm: identify surface points Convex hull algorithms: determining the convex hull of a set of points Graham scan Quickhull Gift wrapping algorithm or Jarvis march Chan's algorithm Kirkpatrick–Seidel algorithm Euclidean distance transform: computes the distance between every point in a grid and a discrete collection of points. Geometric hashing: a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation Gilbert–Johnson–Keerthi distance algorithm: determining the smallest distance between two convex shapes. Jump-and-Walk algorithm: an algorithm for point location in triangulations Laplacian smoothing: an algorithm to smooth a polygonal mesh Line segment intersection: finding whether lines intersect, usually with a sweep line algorithm Bentley–Ottmann algorithm Shamos–Hoey algorithm Minimum bounding box algorithms: find the oriented minimum bounding box enclosing a set of points Nearest neighbor search: find the nearest point or points to a query point Nesting algorithm: make the most efficient use of material or space Point in polygon algorithms: tests whether a given point lies within a given polygon Point set registration algorithms: finds the transformation between two point sets to optimally align them. Rotating calipers: determine all antipodal pairs of points and vertices on a convex polygon or convex hull. Shoelace algorithm: determine the area of a polygon whose vertices are described by ordered pairs in the plane Triangulation Delaunay triangulation Ruppert's algorithm (also known as Delaunay refinement): create quality Delaunay triangulations Chew's second algorithm: create quality constrained Delaunay triangulations Marching triangles: reconstruct two-dimensional surface geometry from an unstructured point cloud Polygon triangulation algorithms: decompose a polygon into a set of triangles Voronoi diagrams, geometric dual of Delaunay triangulation Bowyer–Watson algorithm: create voronoi diagram in any number of dimensions Fortune's Algorithm: create voronoi diagram Quasitriangulation
|
Combinatorial algorithms
| 0.854075
|
277
|
Clock synchronization Berkeley algorithm Cristian's algorithm Intersection algorithm Marzullo's algorithm Consensus (computer science): agreeing on a single value or history among unreliable processors Chandra–Toueg consensus algorithm Paxos algorithm Raft (computer science) Detection of Process Termination Dijkstra-Scholten algorithm Huang's algorithm Lamport ordering: a partial ordering of events based on the happened-before relation Leader election: a method for dynamically selecting a coordinator Bully algorithm Mutual exclusion Lamport's Distributed Mutual Exclusion Algorithm Naimi-Trehel's log(n) Algorithm Maekawa's Algorithm Raymond's Algorithm Ricart–Agrawala Algorithm Snapshot algorithm: record a consistent global state for an asynchronous system Chandy–Lamport algorithm Vector clocks: generate a partial ordering of events in a distributed system and detect causality violations
|
Combinatorial algorithms
| 0.854075
|
278
|
The nearest neighbour search problem arises in numerous fields of application, including: Pattern recognition – in particular for optical character recognition Statistical classification – see k-nearest neighbor algorithm Computer vision – for point cloud registration Computational geometry – see Closest pair of points problem Cryptanalysis – for lattice problem Databases – e.g. content-based image retrieval Coding theory – see maximum likelihood decoding Semantic Search Data compression – see MPEG-2 standard Robotic sensing Recommendation systems, e.g. see Collaborative filtering Internet marketing – see contextual advertising and behavioral targeting DNA sequencing Spell checking – suggesting correct spelling Plagiarism detection Similarity scores for predicting career paths of professional athletes. Cluster analysis – assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense, usually based on Euclidean distance Chemical similarity Sampling-based motion planning
|
Nearest neighbor problem
| 0.853979
|
279
|
In the special case where the data is a dense 3D map of geometric points, the projection geometry of the sensing technique can be used to dramatically simplify the search problem. This approach requires that the 3D data is organized by a projection to a two-dimensional grid and assumes that the data is spatially smooth across neighboring grid cells with the exception of object boundaries. These assumptions are valid when dealing with 3D sensor data in applications such as surveying, robotics and stereo vision but may not hold for unorganized data in general. In practice this technique has an average search time of O(1) or O(K) for the k-nearest neighbor problem when applied to real world stereo vision data.
|
Nearest neighbor problem
| 0.853979
|
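The baseline against which the nearest-neighbour techniques in the two rows above are measured is exhaustive search, linear in the number of points per query; a minimal sketch (function and variable names are mine):

```python
import math

def k_nearest(points: list[tuple[float, ...]],
              query: tuple[float, ...],
              k: int) -> list[tuple[float, ...]]:
    """Brute-force k-nearest-neighbour search under Euclidean distance:
    every point is examined, O(len(points)) work per query."""
    return sorted(points, key=lambda p: math.dist(p, query))[:k]

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5), (0.2, 0.1)]
print(k_nearest(pts, (0.0, 0.0), k=2))  # [(0.0, 0.0), (0.2, 0.1)]
```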
280
|
As an ensemble, the Bayes optimal classifier represents a hypothesis that is not necessarily in $H$. The hypothesis represented by the Bayes optimal classifier, however, is the optimal hypothesis in ensemble space (the space of all possible ensembles consisting only of hypotheses in $H$). This formula can be restated using Bayes' theorem, which says that the posterior is proportional to the likelihood times the prior: $P(h_{i}|T) \propto P(T|h_{i})P(h_{i})$, hence $y = \underset{c_{j}\in C}{\mathrm{argmax}} \sum_{h_{i}\in H} P(c_{j}|h_{i})\,P(h_{i}|T)$.
|
Ensemble Methods
| 0.853971
|
281
|
Speech recognition is based mainly on deep learning, as most of the industry players in this field, such as Google, Microsoft and IBM, reveal that the core technology of their speech recognition is based on this approach. Speech-based emotion recognition, however, can also achieve satisfactory performance with ensemble learning. It is also being successfully used in facial emotion recognition.
|
Ensemble Methods
| 0.853971
|
282
|
The content within the book is written in a question and answer format. It contains some 250 questions, each of which, The Science Teacher states, is answered with a "concise and well-formulated essay that is informative and readable." The Science Teacher review goes on to state that many of the answers given in the book are "little gems of science writing". The Science Teacher summarizes by stating that each question is likely to be thought of by a student, and that "the answers are informative, well constructed, and thorough". The book covers information about the planets, the Earth, the Universe, practical astronomy, history, and awkward questions such as astronomy in the Bible, UFOs, and aliens. Also covered are subjects such as the Big Bang, comprehension of large numbers, and the Moon illusion.
|
A Question and Answer Guide to Astronomy
| 0.853933
|
283
|
A Question and Answer Guide to Astronomy is a book about astronomy and cosmology, and is intended for a general audience. The book was written by Pierre-Yves Bely, Carol Christian, and Jean-Rene Roy, and published in English by Cambridge University Press in 2010. It was originally written in French.
|
A Question and Answer Guide to Astronomy
| 0.853933
|
284
|
The degree can be used to generalize Bézout's theorem in an expected way to intersections of n hypersurfaces in P^n.
|
Degree (algebraic geometry)
| 0.853915
|
285
|
A generalization of Bézout's theorem asserts that, if an intersection of n projective hypersurfaces has codimension n, then the degree of the intersection is the product of the degrees of the hypersurfaces. The degree of a projective variety is the evaluation at 1 of the numerator of the Hilbert series of its coordinate ring. It follows that, given the equations of the variety, the degree may be computed from a Gröbner basis of the ideal of these equations.
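To make the Hilbert-series characterization concrete, consider the standard case of a hypersurface of degree $d$ in $\mathbb{P}^n$ (a textbook computation, included for illustration): its coordinate ring has Hilbert series

$$\frac{1 - t^{d}}{(1 - t)^{n+1}} = \frac{1 + t + \cdots + t^{d-1}}{(1 - t)^{n}},$$

and evaluating the numerator $1 + t + \cdots + t^{d-1}$ at $t = 1$ gives $d$, recovering the degree of the hypersurface.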
|
Degree (algebraic geometry)
| 0.853915
|
286
|
This is a generalization of Bézout's theorem (For a proof, see Hilbert series and Hilbert polynomial § Degree of a projective variety and Bézout's theorem). The degree is not an intrinsic property of the variety, as it depends on a specific embedding of the variety in an affine or projective space. The degree of a hypersurface is equal to the total degree of its defining equation.
|
Degree (algebraic geometry)
| 0.853915
|
287
|
In mathematics, the degree of an affine or projective variety of dimension n is the number of intersection points of the variety with n hyperplanes in general position. For an algebraic set, the intersection points must be counted with their intersection multiplicity, because of the possibility of multiple components. For (irreducible) varieties, if one takes into account the multiplicities and, in the affine case, the points at infinity, the hypothesis of general position may be replaced by the much weaker condition that the intersection of the variety with the hyperplanes has dimension zero (that is, consists of a finite number of points).
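For a concrete instance of this definition (a standard example, stated for illustration): a conic $C \subset \mathbb{P}^2$ has degree 2, since a general line, which is a hyperplane in $\mathbb{P}^2$, meets it in two points. By the generalized Bézout theorem above, two general conics $C_1, C_2$ then intersect in

$$\deg C_1 \cdot \deg C_2 = 2 \cdot 2 = 4$$

points, counted with multiplicity.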
|
Degree (algebraic geometry)
| 0.853915
|
288
|
Actin was first observed experimentally in 1887 by W.D. Halliburton, who extracted a protein from muscle that 'coagulated' preparations of myosin that he called "myosin-ferment". However, Halliburton was unable to further refine his findings, and the discovery of actin is credited instead to Brunó Ferenc Straub, a young biochemist working in Albert Szent-Györgyi's laboratory at the Institute of Medical Chemistry at the University of Szeged, Hungary. Following up on the discovery of Ilona Banga & Szent-Györgyi in 1941 that the coagulation only occurs in some myosin extractions and was reversed upon the addition of ATP, Straub identified and purified actin from those myosin preparations that did coagulate.
|
F actin
| 0.853902
|
289
|
It is possible that actin could be applied to nanotechnology, as its dynamic ability has been harnessed in a number of experiments, including those carried out in acellular systems. The underlying idea is to use the microfilaments as tracks to guide molecular motors that can transport a given load. That is, actin could be used to define a circuit along which a load can be transported in a more or less controlled and directed manner.
|
F actin
| 0.853902
|
290
|
Actin is used in scientific and technological laboratories as a track for molecular motors such as myosin (either in muscle tissue or outside it) and as a necessary component for cellular functioning. It can also be used as a diagnostic tool, as several of its anomalous variants are related to the appearance of specific pathologies. In nanotechnology, actin-myosin systems act as molecular motors that permit the transport of vesicles and organelles throughout the cytoplasm.
|
F actin
| 0.853902
|
291
|
Actin can spontaneously acquire a large part of its tertiary structure. However, the way it acquires its fully functional form from its newly synthesized native form is special and almost unique in protein chemistry. The reason for this special route could be the need to avoid the presence of incorrectly folded actin monomers, which could be toxic as they can act as inefficient polymerization terminators. Nevertheless, it is key to establishing the stability of the cytoskeleton, and additionally, it is an essential process for coordinating the cell cycle. CCT is required in order to ensure that folding takes place correctly.
|
F actin
| 0.853902
|
292
|
A number of natural toxins that interfere with actin's dynamics are widely used in research to study actin's role in biology. Latrunculin – a toxin produced by sponges – binds to G-actin, preventing it from joining microfilaments. Cytochalasin D – produced by certain fungi – serves as a capping factor, binding to the (+) end of a filament and preventing further addition of actin molecules. In contrast, the sponge toxin jasplakinolide promotes the nucleation of new actin filaments by binding and stabilizing pairs of actin molecules. Phalloidin – from the "death cap" mushroom Amanita phalloides – binds to adjacent actin molecules within the F-actin filament, stabilizing the filament and preventing its depolymerization. Phalloidin is often labelled with fluorescent dyes to visualize actin filaments by fluorescence microscopy.
|
F actin
| 0.853902
|
293
|
For example, in medicine it can be used to identify, diagnose and potentially develop treatments for genetic diseases. Similarly, research into pathogens may lead to treatments for contagious diseases. Biotechnology is a burgeoning discipline, with the potential for many useful products and services.
|
Nucleic acid sequence
| 0.853895
|
294
|
The electric field strength at a specific point can be determined from the power delivered to the transmitting antenna, its geometry and radiation resistance. Consider the case of a center-fed half-wave dipole antenna in free space, where the total length L is equal to one half wavelength (λ/2). If constructed from thin conductors, the current distribution is essentially sinusoidal and the radiating electric field is given by

$$E_\theta(r) = \frac{-j I_\circ}{2\pi \varepsilon_0 c\, r} \, \frac{\cos\!\left(\frac{\pi}{2}\cos\theta\right)}{\sin\theta} \, e^{j(\omega t - kr)}$$

where $\theta$ is the angle between the antenna axis and the vector to the observation point, $I_\circ$ is the peak current at the feed-point, $\varepsilon_0 = 8.85 \times 10^{-12}\,\mathrm{F/m}$ is the permittivity of free space, $c = 3 \times 10^{8}\,\mathrm{m/s}$ is the speed of light in vacuum, and $r$ is the distance to the antenna in meters. When the antenna is viewed broadside ($\theta = \pi/2$) the electric field is maximum and given by

$$|E_{\pi/2}(r)| = \frac{I_\circ}{2\pi \varepsilon_0 c\, r}.$$
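A minimal numerical sketch of the broadside formula in Python (the constants follow the text above; the function name is mine):

import math

# Peak broadside E-field of a thin center-fed half-wave dipole in free space:
# |E| = I0 / (2 * pi * eps0 * c * r), from the formula above.
EPS0 = 8.85e-12   # permittivity of free space, F/m
C = 3e8           # speed of light, m/s

def broadside_field(i0_amps, r_meters):
    """Return the peak broadside field strength in V/m."""
    return i0_amps / (2 * math.pi * EPS0 * C * r_meters)

# Example: 1 A peak feed current observed at 1 km.
print(broadside_field(1.0, 1000.0))  # ~0.06 V/m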
|
Electric field strength
| 0.853894
|
295
|
Given any field extension L/K, we can consider its automorphism group Aut(L/K), consisting of all field automorphisms α: L → L with α(x) = x for all x in K. When the extension is Galois this automorphism group is called the Galois group of the extension. Extensions whose Galois group is abelian are called abelian extensions. For a given field extension L/K, one is often interested in the intermediate fields F (subfields of L that contain K). The significance of Galois extensions and Galois groups is that they allow a complete description of the intermediate fields: there is a bijection between the intermediate fields and the subgroups of the Galois group, described by the fundamental theorem of Galois theory.
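A standard worked example of this correspondence (included for illustration, not from the excerpt above): the extension $\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}$ is Galois with

$$\operatorname{Gal}\big(\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}\big) \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z},$$

whose three subgroups of order 2 correspond, under the fundamental theorem, to the three intermediate fields $\mathbb{Q}(\sqrt{2})$, $\mathbb{Q}(\sqrt{3})$ and $\mathbb{Q}(\sqrt{6})$; since the group is abelian, this is an abelian extension.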
|
Degree (field theory)
| 0.853858
|
296
|
An algebraic extension L/K is called normal if every irreducible polynomial over K that has a root in L factors completely into linear factors over L. Every algebraic extension F/K admits a normal closure L, which is an extension field of F such that L/K is normal and which is minimal with this property. An algebraic extension L/K is called separable if the minimal polynomial of every element of L over K is separable, i.e., has no repeated roots in an algebraic closure over K. A Galois extension is a field extension that is both normal and separable. A consequence of the primitive element theorem is that every finite separable extension has a primitive element (i.e. is simple).
|
Degree (field theory)
| 0.853858
|
297
|
Given a field extension, one can "extend scalars" on associated algebraic objects. For example, given a real vector space, one can produce a complex vector space via complexification. In addition to vector spaces, one can perform extension of scalars for associative algebras defined over the field, such as polynomials or group algebras and the associated group representations. Extension of scalars of polynomials is often used implicitly, by just considering the coefficients as being elements of a larger field, but may also be considered more formally. Extension of scalars has numerous applications, as discussed in extension of scalars: applications.
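For instance (a standard construction, stated here for concreteness): complexification sends a real vector space $V$ to

$$V_{\mathbb{C}} := V \otimes_{\mathbb{R}} \mathbb{C},$$

so that $(\mathbb{R}^n)_{\mathbb{C}} \cong \mathbb{C}^n$ and $\dim_{\mathbb{C}} V_{\mathbb{C}} = \dim_{\mathbb{R}} V$.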
|
Degree (field theory)
| 0.853857
|
298
|
This is the topic of the scientific field of structural biology, which employs techniques such as X-ray crystallography, NMR spectroscopy, cryo-electron microscopy (cryo-EM) and dual polarisation interferometry, to determine the structure of proteins. Protein structures range in size from tens to several thousand amino acids.
|
Protein Structure
| 0.853837
|
299
|
Testosterone was identified as 17β-hydroxyandrost-4-en-3-one (C19H28O2), a solid polycyclic alcohol with a hydroxyl group at the 17th carbon atom. This also made it obvious that additional modifications on the synthesized testosterone could be made, i.e., esterification and alkylation. The partial synthesis in the 1930s of abundant, potent testosterone esters permitted the characterization of the hormone's effects, so that Kochakian and Murlin (1936) were able to show that testosterone raised nitrogen retention (a mechanism central to anabolism) in the dog, after which Allan Kenyon's group was able to demonstrate both anabolic and androgenic effects of testosterone propionate in eunuchoidal men, boys, and women. The period of the early 1930s to the 1950s has been called "The Golden Age of Steroid Chemistry", and work during this period progressed quickly.
|
Testosterone
| 0.853807
|