Probabilistic and statistical methods of research. Statistical methods. Statistical analysis of experimental data

When conducting psychological and pedagogical research, an important role is played by mathematical methods of modeling processes and processing experimental data. Among such methods, probabilistic-statistical methods of research deserve mention first of all, since the behavior of an individual in his activity, and of people in a group, is influenced by many random factors. Randomness does not allow phenomena to be described within the framework of deterministic models, since it manifests itself as a lack of regularity in mass events and therefore does not make it possible to predict the occurrence of particular events reliably. Nevertheless, distinct patterns do emerge even in such phenomena. The irregularity inherent in random events is, as a rule, compensated by the emergence of statistical regularities: the stabilization of the frequencies with which random events occur. Consequently, random events possess a definite probability. There are two fundamental probabilistic-statistical methods of psychological and pedagogical research: the classical and the non-classical. Let us carry out a comparative analysis of these methods.

Classical probabilistic-statistical method. The classical probabilistic-statistical method of research is based on probability theory and mathematical statistics. This method is applied to the study of mass phenomena of a random nature and includes several stages, the main ones being the following.

1. Construction of a probabilistic model of reality on the basis of an analysis of statistical data (determination of the distribution law of a random variable). Naturally, the regularities of mass random phenomena are expressed the more clearly, the larger the volume of statistical material. The sample data obtained in an experiment are always limited and, strictly speaking, random in nature. In this connection, an important role belongs to generalizing the regularities observed in samples and extending them to the entire general population of objects. For this purpose, a hypothesis is put forward about the character of the statistical regularity manifested in the phenomenon under study, for example, the hypothesis that the phenomenon obeys the normal distribution law. Such a hypothesis is called the null hypothesis, and it may turn out to be wrong; therefore, along with the null hypothesis, an alternative, or competing, hypothesis is also advanced. Testing how well the experimental data agree with one or another statistical hypothesis is carried out with the help of so-called nonparametric statistical tests, or goodness-of-fit criteria. At present, the Kolmogorov, Smirnov, omega-square and other criteria are widely used. The basic idea of these criteria is to measure the distance between the empirical distribution function and the distribution function of the hypothesized theoretical distribution. The methodology for testing statistical hypotheses has been developed in detail and is described in a large number of works on mathematical statistics.
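As an illustration of this stage, here is a minimal sketch of the Kolmogorov criterion applied to a sample of test scores; the data, the normal-law null hypothesis, and the 5% critical value 1.36/sqrt(n) are illustrative assumptions, not values taken from the study:

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of the normal law N(mu, sigma), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def kolmogorov_statistic(sample, cdf):
    """Largest gap between the empirical CDF and a theoretical CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        # The empirical CDF jumps from i/n to (i+1)/n at x
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

# Hypothetical test scores on a 20-point scale
scores = [12, 14, 15, 15, 16, 13, 11, 17, 14, 15, 16, 13]
mu = sum(scores) / len(scores)
sigma = math.sqrt(sum((x - mu) ** 2 for x in scores) / (len(scores) - 1))

d_n = kolmogorov_statistic(scores, lambda x: normal_cdf(x, mu, sigma))
# Approximate 5% critical value for the Kolmogorov statistic
critical = 1.36 / math.sqrt(len(scores))
print(f"D_n = {d_n:.3f}, critical ~= {critical:.3f}")
print("null hypothesis (normality) is",
      "retained" if d_n < critical else "rejected")
```

In a real study one would also correct the critical value for the fact that mu and sigma are estimated from the same sample (the Lilliefors variant); the simple threshold above is only a sketch.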

2. Carrying out the necessary calculations by mathematical means within the framework of the probabilistic model. In accordance with the probabilistic model of the phenomenon that has been established, the characteristic parameters are calculated, for example, the mathematical expectation (mean value), variance, standard deviation, mode, median, coefficient of skewness, etc.
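The characteristic parameters listed above can be computed directly with the standard library; a short sketch on a hypothetical sample (all data illustrative):

```python
import statistics as st

# Hypothetical sample of test scores on a 20-point scale
scores = [11, 12, 13, 13, 14, 14, 15, 15, 15, 16, 16, 17]

mean = st.mean(scores)      # mathematical expectation (sample mean)
var = st.variance(scores)   # sample variance
std = st.stdev(scores)      # standard deviation
mode = st.mode(scores)      # most frequent value
median = st.median(scores)  # middle value

# Third central moment normalized by sigma^3 gives the skewness coefficient
n = len(scores)
skew = sum((x - mean) ** 3 for x in scores) / n / std ** 3

print(mean, var, std, mode, median, round(skew, 3))
```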

3. Interpretation of the probabilistic-statistical conclusions in relation to the real situation.

At present, the classical probabilistic-statistical method is well developed and is widely used in research in various natural, engineering and social sciences. A detailed description of the essence of this method and of its application to specific problems can be found in a large number of literary sources.

Non-classical probabilistic-statistical method. The non-classical probabilistic-statistical method differs from the classical one in that it is applicable not only to mass events but also to individual events, which are fundamentally random in nature. This method can be used effectively in analyzing the behavior of an individual in the course of one or another activity, for example, in the process of the acquisition of knowledge by students. We shall consider the features of the non-classical probabilistic-statistical method of psychological and pedagogical research using the example of the behavior of students in the process of acquiring knowledge.

For the first time, a probabilistic-statistical model of the behavior of students in the process of acquiring knowledge was proposed in the work cited. Further development of this model was carried out in subsequent work. Learning, as a kind of activity whose purpose is the acquisition of knowledge and skills by a person, depends on the level of development of the learner's consciousness. The structure of consciousness includes such cognitive processes as perception, comprehension, memory, thinking and awareness. An analysis of these processes shows that elements of randomness are inherent in them, conditioned by the random nature of the mental and somatic states of the individual, as well as by physiological, psychological and informational noise in the working of the brain. The latter circumstance led, in describing the processes of thinking, to the replacement of the model of a deterministic dynamic system by the model of a random dynamic system. This means that the determinism of consciousness is realized through randomness. Hence it may be concluded that human knowledge, which is in fact a product of consciousness, also has a random nature, and, consequently, the probabilistic-statistical method can be used to describe the behavior of each individual student in the process of acquiring knowledge.

In accordance with this method, a student is identified by a distribution function (probability density) that determines the probability of finding him at a given point of the information space. In the process of learning, the distribution function with which the student is identified evolves, moving through the information space. Each student possesses individual properties, and independent localization (spatial and kinematic) of individuals relative to one another is allowed.

On the basis of the law of conservation of probability, a system of differential equations is written down. These are continuity equations, which relate the change of the probability density per unit time in phase space (the space of coordinates, velocities and accelerations of various orders) to the divergence of the probability-density flux in the phase space under consideration. An analysis has been carried out of analytical solutions of a number of continuity equations (distribution functions) characterizing the behavior of individual students in the process of learning.
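The continuity equation described verbally above can be written schematically as follows (a hedged reconstruction: the paper gives only a verbal description, so the symbols x, v and the exact structure of the phase space are assumptions):

```latex
% Probability-conservation (continuity) equation in phase space:
% \rho is the probability density identifying a student,
% x = (q, \dot{q}, \ddot{q}, \dots) the phase coordinates
% (position in the information space and its derivatives),
% v the corresponding phase-space velocity.
\frac{\partial \rho(x,t)}{\partial t}
  + \operatorname{div}\!\bigl(\rho(x,t)\,v(x,t)\bigr) = 0,
\qquad
\int \rho(x,t)\,dx = 1 .
```

The normalization condition on the right simply states that the student is found somewhere in the information space with probability one.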

In experimental studies of the behavior of students in the process of acquiring knowledge, probabilistic-statistical scaling is used, in which the measurement scale is the ordered system <A; Ly; F; G; f; M>, where A is a completely ordered set of objects (individuals) possessing the features of interest to us (an empirical system with relations); Ly is a functional space (the space of distribution functions) with relations; F is the operation of homomorphic mapping of A into the subsystem Ly; G is the group of admissible transformations; f is the operation of mapping the distribution functions of the subsystem Ly onto numerical systems with relations of the n-dimensional space M. Probabilistic-statistical scaling serves for finding and processing experimental distribution functions and includes three stages.

1. Finding the experimental distribution functions from the results of a control event, for example, a test. Typical individual distribution functions, found using a twenty-point scale, are shown in Fig. 1. The method of finding such functions is described in the literature.

2. Mapping of the distribution functions onto a numerical space. For this purpose, the moments of the individual distribution functions are calculated. In practice, it is sufficient to confine oneself to determining the moments of the first order (the mathematical expectation), the second order (the variance), and the third order, which characterizes the asymmetry of the distribution function.

3. Ranking of students by level of knowledge on the basis of a comparison of the moments of various orders of their individual distribution functions.
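Stages 2 and 3 can be sketched as follows, assuming each individual distribution function is given as a discrete probability table over the score scale (the students and their probabilities below are hypothetical):

```python
def moments(p):
    """First three moments of a discrete distribution {score: probability}."""
    m1 = sum(k * pk for k, pk in p.items())              # expectation
    m2 = sum((k - m1) ** 2 * pk for k, pk in p.items())  # variance
    m3 = sum((k - m1) ** 3 * pk for k, pk in p.items())  # 3rd central moment
    return m1, m2, m3

# Hypothetical individual distribution functions on a 20-point scale
students = {
    "A": {10: 0.2, 11: 0.5, 12: 0.3},
    "B": {14: 0.3, 15: 0.4, 16: 0.3},
    "C": {12: 0.25, 13: 0.5, 14: 0.25},
}

# Stage 3: rank by the first moment (expectation), breaking ties by variance
ranked = sorted(students,
                key=lambda s: (-moments(students[s])[0],
                               moments(students[s])[1]))
print(ranked)
```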

Fig. 1. Typical individual distribution functions for a group of students who received different grades in an examination in general physics: 1 – traditional grade "2"; 2 – traditional grade "3"; 3 – traditional grade "4"; 4 – traditional grade "5"

By additively combining the individual distribution functions, experimental distribution functions for the student body as a whole were found (Fig. 2).


Fig. 2. Evolution of the total distribution function of a student body, approximated by smooth curves: 1 – after the first year; 2 – after the second year; 3 – after the third year; 4 – after the fourth year; 5 – after the fifth year

Analysis of the data shown in Fig. 2 demonstrates that, as they move through the information space, the distribution functions spread out. This happens because the mathematical expectations of the distribution functions of individuals move with different velocities, while the functions themselves spread out on account of their variance. Further analysis of these distribution functions can be carried out within the framework of the classical probabilistic-statistical method.

Discussion of results. The analysis of the classical and non-classical probabilistic-statistical methods of psychological and pedagogical research has shown that there is an essential difference between them. As follows from the above, the classical method is applicable only to the analysis of mass events, while the non-classical method is applicable to the analysis of both mass and individual events. In this connection, the classical method may conventionally be called the mass probabilistic-statistical method (MPSM), and the non-classical method the individual probabilistic-statistical method (IPSM). In [4] it is shown that none of the classical methods of assessing students' knowledge within the framework of the probabilistic-statistical model of the individual is applicable for these purposes.

Let us consider the distinctive features of the MPSM and IPSM methods using the example of measuring the completeness of students' knowledge. To this end, let us carry out a thought experiment. Suppose there is a large number of students who are absolutely identical in their mental and physical characteristics and have the same prior history, and that, without interacting with one another, they take part simultaneously in the same cognitive process, experiencing absolutely the same, strictly determined influence. Then, in accordance with the classical notions about the objects of measurement, all the students should receive identical estimates of the completeness of their knowledge, with any given accuracy of measurement. In reality, however, with sufficiently high accuracy of measurement, the estimates of the completeness of the students' knowledge will differ. It is impossible to explain this result within the framework of the MPSM, since that method presupposes that the influence on absolutely identical, non-interacting students is strictly deterministic in character. The classical probabilistic-statistical method does not take into account the fact that the determinism of the learning process is realized through the randomness intrinsically inherent in each student cognizing the surrounding world.

The random nature of the student's behavior in the process of acquiring knowledge is taken into account in the IPSM. Application of the individual probabilistic-statistical method to the analysis of the behavior of the idealized group of students considered above shows that it is impossible to indicate exactly the position of each student in the information space; one can only say with what probability he is located in one region of the information space or another. In fact, each student is identified by an individual distribution function, and its parameters, such as the mathematical expectation, the variance and others, are individual for each student. This means that the individual distribution functions will be located in different regions of the information space. The reason for this behavior of students lies in the random nature of the learning process.

However, in a number of cases the research results obtained within the framework of the MPSM can be interpreted within the framework of the IPSM. Suppose that a teacher, in assessing a student's knowledge, uses a five-point measurement scale. In this case the error of the assessment is ±0.5 points. Therefore, if a student is given a grade of, say, 4 points, this means that his knowledge lies in the interval from 3.5 to 4.5 points. In fact, the position of the individual in the information space is in this case determined by a rectangular distribution function whose width is equal to the measurement error of ±0.5 points, while the grade corresponds to its mathematical expectation. This error is so large that it does not allow the true form of the distribution function to be observed. Nevertheless, despite such a rough approximation of the distribution function, the study of its evolution makes it possible to obtain important information both about the behavior of an individual and about the student body as a whole.
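The rectangular distribution described here is easy to make concrete; a small sketch under the stated assumptions (five-point scale, ±0.5-point measurement error):

```python
def rectangular(grade, half_width=0.5):
    """Rectangular (uniform) distribution representing a measured grade:
    the student's knowledge lies somewhere in [grade - 0.5, grade + 0.5]."""
    a, b = grade - half_width, grade + half_width
    mean = (a + b) / 2        # mathematical expectation = the grade itself
    var = (b - a) ** 2 / 12   # variance of a uniform distribution
    return a, b, mean, var

# A grade of "4" on the five-point scale
a, b, mean, var = rectangular(4)
print(a, b, mean, var)
```

The variance (b − a)²/12 is the standard result for a uniform distribution; it quantifies how coarse the five-point scale is as an approximation of the true individual distribution function.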

The result of a measurement of a student's knowledge is directly or indirectly influenced by the teacher (the measurer), who is himself characterized by randomness. In the process of pedagogical measurement there is, in fact, an interaction of two random dynamic systems, which identify the behavior of the student and of the teacher in the given process. The interaction of the student subsystem with the teaching-staff subsystem has been considered, and it has been shown that the velocity of motion of the mathematical expectation of the individual distribution functions of students in the information space is proportional to the influence function of the teaching staff and inversely proportional to the inertia function, which characterizes the resistance to a change of the position of the mathematical expectation in the space.

At the present time, despite significant achievements in developing the theoretical and practical foundations of measurement in psychological and pedagogical research, the problem of measurement as a whole is still far from solved. This is due, first of all, to the fact that there is still insufficient information about the influence of consciousness on the measurement process. A similar situation has arisen in solving the measurement problem in quantum mechanics. Thus, in considering the conceptual problems of the quantum theory of measurement, it has been noted that it is hardly possible to resolve certain paradoxes of measurement in quantum mechanics "without directly including the observer's consciousness in the theoretical description of quantum measurement." It is further stated that "...it is not inconsistent to suppose that consciousness can make some event probable, even if, according to the laws of physics (quantum mechanics), the probability of this event is small. Let us make an important refinement of the formulation: the consciousness of a given observer can make it probable that precisely he will see the given event."

In scientific cognition there functions a complex, dynamic, integral, subordinated system of diverse methods, which are applied at various stages and levels of cognition. Thus, in the process of scientific research, various general scientific methods and means of cognition are applied both at the empirical and at the theoretical level. In their turn, general scientific methods, as already noted, include a system of empirical, general logical and theoretical methods and means of cognizing reality.

1. General logical methods of scientific research

General logical methods are used primarily in theoretical scientific research, although some of them can also be applied at the empirical level. What are these methods, and what is their essence?

One of them, widely applied in scientific research, is the method of analysis (from the Greek analysis – decomposition, dismemberment) – a method of scientific cognition consisting in the mental division of the object under study into its component elements with the aim of studying its structure, individual features, properties, internal connections and relations.

Analysis enables the researcher to penetrate into the essence of the phenomenon under study by dividing it into its component elements and to single out what is principal and essential in it. Analysis as a logical operation enters as a component into any scientific research and usually constitutes its first stage, when the researcher passes from an undivided description of the object under study to the revelation of its structure and composition, as well as its properties and connections. Analysis is present already at the sensory level of cognition, being included in the processes of sensation and perception. At the theoretical level of cognition, the highest form of analysis begins to function – mental, or abstract-logical, analysis, which arises together with the skills of the material-practical dismemberment of objects in the process of labor. Gradually, man mastered the ability to let mental analysis precede material-practical analysis.

It should be emphasized that, being a necessary method of cognition, analysis is only one of the moments of the process of scientific research. It is impossible to cognize the essence of an object merely by dividing it into the elements of which it consists. For example, a chemist, in the words of Hegel, places a piece of meat in his retort, subjects it to various operations, and then declares: I have found that the meat consists of oxygen, carbon, hydrogen, and so on. But these substances – elements – are no longer the essence of meat.

Each field of knowledge has, as it were, its own limit to the division of the object, beyond which we pass to a different character of properties and laws. When the particulars have been studied by means of analysis, the next stage of cognition begins – synthesis.

Synthesis (from the Greek synthesis – connection, combination, composition) is a method of scientific cognition consisting in the mental unification of the component sides, elements, properties and connections of the object under study, dismembered as a result of analysis, and in the study of this object as a single whole.

Synthesis is not an arbitrary, eclectic combination of the parts and elements of the whole, but a dialectical whole in which the essence is brought out. The result of synthesis is a completely new formation, whose properties are not merely the external sum of the properties of its components, but also the result of their internal interconnection and interdependence.

Analysis fixes mainly that specific element which distinguishes the parts from one another. Synthesis reveals that essential common element which binds the parts into a single whole.

The researcher mentally dissects an object into its component parts in order first to discover these parts themselves, to find out what the whole consists of, and then to consider it as composed of these parts, each now examined separately. Analysis and synthesis are in dialectical unity: our thinking is as much analytic as it is synthetic.

Analysis and synthesis have their origins in practical activity. Constantly dividing various objects into their component parts in his practical activity, man gradually learned to separate objects mentally as well. Practical activity consisted not only in the dismemberment of objects, but also in the reunification of parts into a single whole. On this basis, mental analysis and synthesis gradually arose.

Depending on the character of the study of the object and the depth of penetration into its essence, various kinds of analysis and synthesis are applied.

1. Direct, or empirical, analysis and synthesis – applied, as a rule, at the stage of a superficial acquaintance with the object. This kind of analysis and synthesis makes it possible to apprehend the phenomena of the object under study.

2. Elementary theoretical analysis and synthesis – widely applied as a powerful instrument for apprehending the essence of the phenomenon under study. The result of applying such analysis and synthesis is the establishment of cause-and-effect connections and the identification of various regularities.

3. Structural-genetic analysis and synthesis – permits the deepest penetration into the essence of the object under study. This kind of analysis and synthesis requires singling out, in a complex phenomenon, those elements which are the most important and the most essential and which exert a decisive influence on all the other sides of the object under study.

In the process of scientific research, the methods of analysis and synthesis function in inseparable connection with the method of abstraction.

Abstraction (from the Latin abstractio – withdrawal) is a general logical method of scientific cognition consisting in a mental withdrawal from the inessential properties, connections and relations of the objects under study, together with the simultaneous mental isolation of those essential sides, properties and connections of these objects that are of interest to the researcher. Its essence lies in the fact that a thing, a property or a relation is mentally singled out and at the same time separated from other things, properties and relations, and is considered, as it were, in "pure form."

Abstraction in human mental activity has a universal character, for every step of thought is connected with this process or with the use of its results. The essence of this method consists in that it permits one mentally to withdraw from the inessential, secondary properties, connections and relations of objects, and at the same time mentally to single out and fix those sides, properties and connections of these objects that are of interest to the research.

The process of abstraction is distinguished from the result of this process, which is called an abstraction. By the result of abstraction is usually understood knowledge about certain sides of the objects under study. The process of abstraction is the totality of logical operations leading to such a result (an abstraction). Examples of abstractions are the countless concepts that man uses not only in science but also in everyday life.

The question of what in objective reality is singled out by the abstracting work of thought, and from what thought withdraws, is decided in each concrete case depending on the nature of the object under study and the tasks of the research. In the course of its historical development, science ascends from one level of abstraction to another, higher one. The development of science in this respect is, in the words of W. Heisenberg, "the deployment of abstract structures." The decisive step into the sphere of abstraction was taken when people mastered counting (number), thereby opening the path leading to mathematics and to mathematical natural science. In this connection W. Heisenberg notes: "Concepts initially formed by abstraction from concrete experience take on a life of their own. They prove to be richer in content and more productive than could have been expected at first. In their subsequent development they reveal their own constructive possibilities: they contribute to the construction of new forms and concepts, permit connections to be established among them, and can, within certain limits, be applicable in our attempts to understand the world of phenomena."

Even a brief analysis permits the assertion that abstraction is one of the most fundamental cognitive logical operations. Therefore it is a most important method of scientific research. Closely connected with the method of abstraction is the method of generalization.

Generalization is the logical process, and the result, of a mental passage from the singular to the general, from the less general to the more general.

Scientific generalization is not simply a mental singling-out and synthesis of similar features, but penetration into the essence of a thing: the discernment of the unitary in the diverse, the general in the singular, the regular in the accidental, as well as the unification of objects by similar properties or connections into homogeneous groups and classes.

In the process of generalization, a passage is made from singular concepts to general ones, from less general concepts to more general ones, from singular judgments to general ones, from judgments of lesser generality to judgments of greater generality. Examples of such generalization are: the mental passage from the concept "mechanical form of motion of matter" to the concept "form of motion of matter" and, further, "motion"; from the concept "spruce" to the concept "coniferous plant" and, further, "plant"; from the judgment "this metal is electrically conductive" to the judgment "all metals are electrically conductive."

In scientific research, the following kinds of generalization are most often applied: inductive, when the researcher proceeds from individual (singular) facts and events to their general expression in thought; and logical, when the researcher passes from one, less general, thought to another, more general one. The limit of generalization is the philosophical categories, which cannot be generalized, since they have no generic concept.

The logical passage from a more general thought to a less general one is the process of limitation. In other words, it is the logical operation inverse to generalization.

It must be emphasized that the human capacity for abstraction and generalization was formed and developed on the basis of social practice and the mutual intercourse of people. It has enormous significance both in the cognitive activity of people and in the general progress of the material and spiritual culture of society.

Induction (from the Latin inductio – guidance) is a method of scientific cognition in which the general conclusion is knowledge about the whole class of objects, obtained as a result of the study of individual elements of this class. In induction, the thought of the researcher proceeds from the particular and the singular, through the particular, to the general. Induction, as a logical method of research, is connected with the generalization of the results of observations and experiments, with the movement of thought from the singular to the general. Since experience is always infinite and incomplete, inductive conclusions always have a problematic (probabilistic) character. Inductive generalizations are usually regarded as empirical truths, or empirical laws. The objective basis of induction is the repetition of the phenomena of reality and of their features. Discovering similar features in many objects of a certain class, we come to the conclusion that these features are inherent in all objects of that class.

According to the character of the conclusion, the following main groups of inductive inferences are distinguished:

1. Complete induction – an inference in which the general conclusion about a class of objects is made on the basis of the study of all the objects of this class. Complete induction gives reliable conclusions, owing to which it is widely applied as a form of proof in scientific research.

2. Incomplete induction – an inference in which the general conclusion is obtained from premises that do not cover all the objects of the given class. Two kinds of incomplete induction are distinguished. The first is popular induction, or induction through simple enumeration: an inference in which a general conclusion about a class of objects is made on the grounds that among the observed facts not a single one contradicting the generalization has been encountered. The second is scientific induction: an inference in which the general conclusion about all the objects of a class is made on the basis of knowledge of the necessary features or causal connections of some of the objects of the given class. Scientific induction can give not only probable but also reliable conclusions. Scientific induction has its own methods of cognition. The point is that it is very difficult to establish a causal connection between phenomena; in a number of cases, however, this connection can be established with the help of logical techniques called the methods of establishing causal connection, or the methods of scientific induction. There are five such methods:

1. The method of single similarity: if two or more cases of the phenomenon under study have only one circumstance in common, while all the other circumstances differ, then this single similar circumstance is the cause of the given phenomenon:

ABC – abc
ADE – ade
Hence, A is the cause of a.

In other words, if the antecedent circumstances ABC give rise to the phenomena abc, while the circumstances ADE give rise to the phenomena ade, it may be concluded that A is the cause of a (or that the phenomena A and a are causally connected).

2. The method of single difference: if the cases in which a phenomenon does and does not occur differ in only one antecedent circumstance, all the other circumstances being identical, then this one circumstance is the cause of the given phenomenon:

ABC – abc
BC – bc
Hence, A is the cause of a.

In other words, if the antecedent circumstances ABC give rise to the phenomena abc, while the circumstances BC (circumstance A being eliminated in the course of the experiment) give rise to the phenomena bc, it may be concluded that A is the cause of a. The ground for this conclusion is the disappearance of a upon the elimination of A.

3. The combined method of similarity and difference is a combination of the first two methods.

4. The method of concomitant variations: if the appearance or change of one phenomenon each time necessarily calls forth a definite change in another phenomenon, then both phenomena stand in causal connection with each other:

Change of A – change of a
B, C – unchanged
Hence, A is the cause of a.

In other words, if upon a change of the antecedent phenomenon A the observed phenomenon a also changes, while the other antecedent phenomena remain unchanged, it may be concluded that A is the cause of a.

5. The method of residues: if it is known that all the circumstances necessary for the phenomenon under study, except one, are not its cause, then this one remaining circumstance is probably the cause of the given phenomenon. Using the method of residues, the French astronomer Le Verrier predicted the existence of the planet Neptune, which was then discovered by the German astronomer Galle.
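The first of these canons, the method of single similarity, lends itself to a simple sketch in code: if the circumstances of each case are represented as a set, the candidate cause is whatever circumstance is shared by all observed cases of the phenomenon. The cases below are hypothetical labels, not data from the text:

```python
def method_of_agreement(cases):
    """Mill's method of (single) similarity.

    cases: list of sets, each holding the antecedent circumstances of one
    observed occurrence of the phenomenon under study.
    Returns the circumstances common to all cases: the candidate cause(s).
    """
    common = set(cases[0])
    for circumstances in cases[1:]:
        common &= circumstances  # keep only what every case shares
    return common

# ABC gives rise to abc, ADE gives rise to ade:
# A is the only shared antecedent circumstance.
cases = [{"A", "B", "C"}, {"A", "D", "E"}]
print(method_of_agreement(cases))
```

Note that the sketch only formalizes the elimination step; as the text goes on to say, the method of similarity is the weakest of the five, since a shared circumstance may be coincidental rather than causal.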

The considered methods of scientific induction for establishing causal connections are most often applied not in isolation but in interconnection, complementing one another. Their value depends chiefly on the degree of probability of the conclusion that a given method yields. It is held that the strongest method is the method of difference, and the weakest the method of similarity, the other three methods occupying an intermediate position. This difference in the value of the methods rests mainly on the fact that the method of similarity is connected chiefly with observation, while the method of difference is connected with experiment.

Even a brief description of the method of induction makes it possible to ascertain its merit and importance. The significance of this method lies first of all in its close connection with facts, experiment and practice. In this connection F. Bacon wrote: "If we mean to penetrate into the nature of things, then everywhere we turn to induction; for we consider induction to be the true form of proof, which protects the senses from all kinds of error, follows nature closely, and borders on and almost merges with practice."

In modern logic, induction is regarded as a theory of probabilistic inference. Attempts are being made to formalize the inductive method on the basis of the ideas of probability theory, which should help to grasp the logical problems of this method more clearly and to determine its heuristic value.

Deduction (from the Latin deductio – derivation) is a thought process in which knowledge about an element of a class is derived from knowledge of the general properties of the whole class. In other words, in deduction the thought of the researcher proceeds from the general to the particular (the singular). For example: "All the planets of the Solar System move around the Sun"; "The Earth is a planet of the Solar System"; therefore: "The Earth moves around the Sun." In this example, thought moves from the general (the first premise) to the particular (the conclusion). Thus, deductive inference permits a better cognition of the singular, since with its help we obtain new (inferential) knowledge that the given object possesses a feature inherent in the whole class.

The objective basis of deduction is that every object combines within itself the unity of the general and the singular. This connection is inseparable and dialectical, which makes it possible to cognize the singular on the basis of knowledge of the general. Moreover, if the premises of a deductive inference are true and correctly connected with one another, the conclusion will necessarily be true. In this respect deduction differs favorably from the other methods of cognition. General principles and laws do not let the researcher go astray in the process of deductive cognition; they help to understand particular phenomena of reality correctly. It would be wrong, however, to overestimate the scientific significance of the deductive method on this basis. For the formal power of inference to come into its own, initial knowledge is needed—the general premises employed in the process of deduction—and obtaining them is a task of considerable complexity for science.

The important cognitive significance of deduction is revealed when the role of the general premise is played not by a mere inductive generalization but by a hypothetical assumption, for example, a new scientific idea. In this case deduction becomes the starting point for the emergence of a new theoretical system. The theoretical knowledge created in this way in turn forms the basis for new inductive generalizations.

All this creates real preconditions for a steady growth of the role of deduction in scientific research. Science increasingly encounters objects that are inaccessible to sense perception (for example, the microworld, the Universe, the past of humanity, and so on). In cognizing such objects one has to resort far more often to the power of thought than to the power of observation and experiment. Deduction is also indispensable in all fields of knowledge where theoretical propositions are formulated for the description of formal rather than real systems, for example, in mathematics. Since formalization is being applied ever more widely in modern science, the role of deduction in scientific cognition grows accordingly.

However, the role of deduction in scientific research must not be absolutized, still less opposed to induction and the other methods of scientific cognition. Extremes of both a metaphysical and a rationalistic kind are unacceptable. On the contrary, deduction and induction are closely interrelated and complement each other. Inductive research presupposes the use of general theories, laws, and principles, that is, it includes a moment of deduction, while deduction is impossible without general propositions obtained by the inductive route. In other words, induction and deduction are linked in the same necessary way as analysis and synthesis. One must strive to apply each of them in its proper place, and this can be achieved only by keeping in view their connection and mutual complementarity. "Great discoveries," notes L. de Broglie, "the leaps of scientific thought forward, are created by induction—a risky but truly creative method... Of course, one need not conclude from this that the rigor of deductive reasoning has no value. In fact, only it prevents the imagination from falling into error; only it permits, after induction has established new points of departure, the drawing of consequences and the comparison of conclusions with facts. Only deduction can provide the testing of hypotheses and serve as a valuable antidote to an overreaching fantasy." With such a dialectical approach, each of the mentioned methods of scientific cognition can fully display all its merits.

Analogy. Studying the properties, attributes, and connections of the objects and phenomena of reality, we cannot cognize them all at once, in their entirety; we study them gradually, revealing ever new properties step by step. Having studied some of the properties of an object, we may find that they coincide with the properties of another, already well-studied object. Having established such a similarity and found many coinciding attributes, one may suppose that other properties of these objects coincide as well. The course of such reasoning constitutes the basis of analogy.

Analogy is a method of scientific research by which, from the similarity of objects of a given class in some attributes, a conclusion is drawn about their similarity in other attributes. The essence of analogy can be expressed by the formula:

A possesses the attributes a, b, c, d

B possesses the attributes a, b, c

Therefore, B probably possesses the attribute d.

In other words, in analogy the researcher's thought moves from knowledge of one particular to knowledge of another particular—that is, from the particular to the particular.

With respect to concrete objects, conclusions drawn by analogy are, as a rule, only plausible in character: they are one of the sources of scientific hypotheses and inductive reasoning, and they play an important role in scientific discoveries. For example, the chemical composition of the Sun is similar to the chemical composition of the Earth in many respects. Therefore, when the element helium, not yet known on Earth, was discovered on the Sun, it was concluded by analogy that a similar element should exist on Earth as well. The correctness of this conclusion was established and confirmed later. In a similar way L. de Broglie, having assumed a similarity between particles of matter and the field, arrived at the conclusion about the wave nature of particles of matter.

To increase the probability of conclusions drawn by analogy, it is necessary to strive to ensure that:

    not only the external properties of the compared objects are revealed, but mainly the internal ones;

    the objects are similar in their most important and essential attributes, and not in accidental and secondary ones;

    the range of coinciding attributes is as wide as possible;

    not only similarities but also differences are taken into account, so that the latter are not transferred to the other object.

The method of analogy gives the most valuable results when an organic relationship is established between the similar attributes and the attribute that is transferred to the object under study.

The truth of conclusions by analogy can be compared with the truth of conclusions by the method of incomplete induction. In both cases reliable conclusions may be obtained, but only when each of these methods is applied not in isolation from the other methods of scientific cognition but in an inseparable dialectical connection with them.

The method of analogy, understood in the broadest sense as the transfer of information about one object to another, constitutes the epistemological basis of modeling.

Modeling is a method of scientific cognition in which the study of an object (the original) is replaced by the study of its copy (the model), after which the knowledge obtained from studying the model is transferred to the original.

The essence of the modeling method consists in reproducing the properties of the object of study on a specially created analogue—the model. What is a model?

A model (from the Latin modulus: measure, image, norm) is a conventional image of some object (the original), a particular way of expressing the properties and connections of the objects and phenomena of reality on the basis of analogy: a similarity is established between them, and on this basis they are reproduced in a material or ideal likeness of the object. In other words, a model is an analogue, a "substitute" for the original object, which in cognition and practice serves to acquire and expand knowledge (information) about the original for the purposes of constructing, transforming, or controlling it.

Between the model and the original there must be a certain similarity: in physical characteristics, in functions, in the behavior of the object under study, in its structure, and so on. It is precisely this similarity that makes it possible to transfer the information obtained from studying the model to the original.

Since modeling is very close to the method of analogy, the logical structure of inference by analogy serves as the organizing factor that unites all the moments of modeling into a single purposeful process. One may even say that, in a certain sense, modeling is a variety of analogy. The method of analogy serves as the logical basis for the conclusions drawn in the course of modeling. For example, if a model A′ possesses the attributes a, b, c, d and the original A possesses the attributes a, b, c, it is concluded that the original A probably also possesses the attribute d.

The use of modeling is dictated by the need to reveal aspects of objects that either cannot be grasped by direct study or are unprofitable to study in this way for purely economic reasons. A person cannot, for example, directly observe the natural formation of diamonds, the origin and development of life on Earth, or a whole range of phenomena of the micro- and mega-world. One therefore has to resort to the artificial reproduction of such phenomena in a form convenient for observation and study. In a number of cases it is far more convenient and economical to construct and study a model than to experiment directly with the object.

Modeling is widely used for calculating the trajectories of ballistic missiles, for studying the operating modes of machines and even of whole enterprises, as well as in the management of enterprises, in the distribution of material resources, in the study of life processes in the organism, and in society.

The models used in everyday and in scientific cognition are divided into two large classes: material and ideal (logical). The former are natural objects that obey natural laws in their functioning; they reproduce the subject of research in a more or less visual material form. Logical models are ideal constructions fixed in an appropriate sign form and functioning according to the laws of logic and mathematics. The importance of sign models lies in the fact that, with the help of symbols, they make it possible to reveal connections and relations of reality that are practically impossible to detect by other means.

At the present stage of scientific and technological progress, computer modeling has become widespread in science and in various fields of practice. A computer running a special program is capable of modeling the most varied processes: for example, fluctuations of market prices, population growth, the launch and entry into orbit of an artificial Earth satellite, chemical reactions, and so on. The study of each such process is carried out by means of the corresponding computer model.
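As a minimal sketch of what such a computer model can look like, the following toy program simulates population growth with the discrete logistic equation x(n+1) = r·x(n)·(1 − x(n)). The parameter values are hypothetical and chosen purely for illustration; they do not come from the text.

```python
def simulate_population(r, x0, steps):
    """Iterate the logistic model x_{n+1} = r * x_n * (1 - x_n).

    x is the population size normalized to the interval [0, 1];
    r is the (hypothetical) growth-rate parameter.
    Returns the whole trajectory, starting from x0.
    """
    trajectory = [x0]
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

# For 1 < r < 3 the model settles to the stable equilibrium 1 - 1/r.
traj = simulate_population(r=2.5, x0=0.1, steps=100)
print(round(traj[-1], 3))  # → 0.6, i.e. 1 - 1/2.5
```

Studying the model (here, iterating a formula) stands in for studying the real process, exactly as the modeling method prescribes: conclusions about equilibria or oscillations are first drawn on the model and only then transferred, with caution, to the original.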

The system method. The present stage of scientific cognition is characterized by the ever-growing importance of theoretical thinking and of the theoretical sciences. An important place among the sciences is occupied by systems theory, which analyzes systemic methods of research. The systemic method of cognition gives the most adequate expression to the dialectics of the development of the objects and phenomena of reality.

The system method is a set of general scientific methodological principles and procedures of research based on an orientation toward revealing the integrity of an object as a system.

The basis of the system method is formed by the concepts of system and structure, which can be defined as follows.

A system (from the Greek systema: a whole made up of parts; a combination) is a general scientific concept expressing a totality of elements that are interconnected both with one another and with the environment and that form a certain integrity, a unity of the object under study. The types of systems are highly varied: material and ideal, inorganic and living, mechanical and organic, biological and social, static and dynamic, and so on. Moreover, every system is a totality of diverse elements forming its particular structure. What is structure?

Structure (from the Latin structura: building, arrangement, order) is a relatively stable way (law) of connecting the elements of an object that ensures the integrity of one or another complex system.

The specificity of the system approach is determined by the fact that it orients research toward revealing the integrity of the object and the mechanisms that ensure it, toward identifying the diverse types of connections of a complex object and bringing them together into a single theoretical picture.

The main principle of the general theory of systems is the principle of system integrity, which means viewing nature, including society, as a large and complex system decomposing into subsystems that, under certain conditions, act as relatively independent systems.

All the variety of concepts and approaches in the general theory of systems can, at the first level of abstraction, be divided into two large classes of theories: empirical-intuitive and abstract-deductive.

1. In empirical-intuitive concepts, concrete, really existing objects are taken as the primary object of research. In the process of ascending from the concrete and singular to the general, the concepts of system and of systemic principles of research at different levels are formulated. This procedure outwardly resembles the transition from the singular to the general in empirical cognition, but behind the external similarity a certain difference is hidden. It consists in the fact that, whereas the empirical method proceeds from recognizing the primacy of elements, the system approach proceeds from recognizing the primacy of systems. In the system approach, systems are taken as the starting point of research: integral formations consisting of many elements together with their connections and relations, subject to certain laws. The empirical method, by contrast, confines itself to formulating laws expressing the relations between the elements of a given object or a given level of phenomena. And although a moment of generality is present in these laws, that generality nevertheless relates to a narrow class of objects, for the most part of the same kind.

2. In abstract-deductive concepts, abstract objects—systems characterized by maximally general properties and relations—are taken as the starting point of research. The further descent from maximally general systems to more and more specific ones is accompanied by the formulation of systemic principles that apply to concretely defined classes of systems.

The empirical-intuitive and the abstract-deductive approaches are equally legitimate; they are not opposed to each other. On the contrary, their joint use opens up extremely great cognitive possibilities.

The system method makes it possible to interpret scientifically the principles of the organization of systems. The objectively existing world appears as a world of particular systems. Such a system is characterized not only by the presence of interconnected components and elements but also by their ordering, their organization on the basis of a certain set of laws. That is why systems are not chaotic but ordered and organized.

In the process of research one can, of course, "proceed" from the elements to the integral systems, as well as the other way round, from the integral systems to the elements. But under all circumstances research cannot be isolated from systemic connections and relations. Ignoring such connections inevitably leads to one-sided or erroneous conclusions. It is no accident that, in the history of cognition, straightforward and one-sided mechanicism in the explanation of biological and social phenomena drifted toward positions recognizing a first impulse and a spiritual substance.

Proceeding from the above, the following basic requirements of the system method can be singled out:

revealing the dependence of each element on its place and function in the system, bearing in mind that the properties of the whole are not reducible to the sum of the properties of its elements;

analyzing the extent to which the behavior of the system is conditioned both by the characteristics of its individual elements and by the properties of its structure;

studying the mechanism of interdependence and interaction between the system and its environment;

studying the character of the hierarchy inherent in the given system;

providing a plurality of descriptions for the purpose of a multi-aspect coverage of the system;

considering the dynamism of the system, presenting it as a developing integrity.

An important concept of the system approach is the concept of "self-organization." It characterizes the process of creating, reproducing, or perfecting the organization of a complex, open, dynamic, self-developing system, the connections between whose elements are not rigid but probabilistic. The property of self-organization is inherent in objects of very different nature: the living cell, the organism, a biological population, human collectives.

The class of systems capable of self-organization comprises open and nonlinear systems. The openness of a system means the presence in it of sources and sinks, of an exchange of matter and energy with the environment. However, not every open system self-organizes and builds structures: everything depends on the interplay of two principles—the principle that creates structure and the dissipative principle that disperses and erodes it.

In modern science, self-organizing systems are the special subject of synergetics, a general scientific theory of self-organization oriented toward the search for the laws of evolution of open, nonequilibrium systems of any basic nature: natural, social, cognitive.

At the present time the system method is acquiring ever greater methodological significance in the solution of natural-scientific, socio-historical, psychological, and other problems. It is widely used by practically all sciences, which is conditioned by the urgent epistemological and practical needs of the development of science at the current stage.

Probabilistic (statistical) methods are methods by means of which the action of a multitude of random factors characterized by a stable frequency is studied, which makes it possible to discover the necessity that "breaks through" the combined action of many chance events.

Probabilistic methods are formed on the basis of probability theory, which is often called the science of randomness; in the view of many scientists, probability and randomness are practically inseparable. The categories of necessity and chance are by no means obsolete; on the contrary, their role in modern science has grown immeasurably. As the history of cognition has shown, "we are only now beginning to appreciate properly the significance of the whole range of problems connected with necessity and chance."

For an understanding of the essence of probabilistic methods it is necessary to consider their basic concepts: "dynamic regularities," "statistical regularities," and "probability." The two types of regularities mentioned differ in the character of the predictions that follow from them.

In laws of the dynamic type, predictions have an unambiguous character. Dynamic laws characterize the behavior of relatively isolated objects consisting of a small number of elements, in which one may abstract from a whole series of random factors; this makes exact prediction possible, for example, in classical mechanics.

In statistical laws, predictions are not reliable but only probabilistic in character. This character of the predictions is conditioned by the action of many random factors that are present in statistical phenomena, or mass events: for example, the large number of molecules in a gas, the number of individuals in populations, the number of people in large collectives, and so on.

A statistical regularity arises as the result of the interaction of a large number of elements composing an object—a system—and therefore characterizes not so much the behavior of an individual element as that of the object as a whole. The necessity manifested in statistical laws arises as a result of the mutual compensation and balancing of many random factors. "Although statistical regularities may lead to statements whose degree of probability is so high that it borders on certainty, exceptions are nevertheless possible in principle."

Statistical laws, although they do not provide unambiguous and reliable predictions, are the only possible ones in the study of mass phenomena of a random character. Behind the combined action of various factors of a random character, which are practically impossible to grasp, statistical laws reveal something stable, necessary, and repeating. They serve as a confirmation of the dialectics of the transformation of the accidental into the necessary. Dynamic laws turn out to be a limiting case of statistical ones, in which probability becomes practical certainty.

Probability is a concept characterizing the quantitative measure (degree) of the possibility of the occurrence of a certain random event under certain conditions that may be repeated many times. One of the main tasks of probability theory is to elucidate the regularities arising from the interaction of a large number of random factors.

Probabilistic-statistical methods are widely used in the study of mass phenomena, especially in such scientific disciplines as mathematical statistics, statistical physics, quantum mechanics, cybernetics, and synergetics.

3.5.1. The probabilistic-statistical method of research.

In many cases it is necessary to study not only deterministic but also random, probabilistic (statistical) processes. These processes are considered within the framework of probability theory.

The set of values of the random variable x constitutes the primary mathematical material. The set containing all possible realizations of a mass phenomenon is called the general population, or the large sample N. Usually only a part of the general population is studied; it is called the sample population, or the small sample.

The probability P(x) of an event x is the ratio of the number of cases N(x) that lead to the occurrence of the event x to the total number of possible cases N:

P(x)=N(x)/N.
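This ratio can be illustrated with a short simulation. The sketch below (the die example and the trial count are illustrative assumptions, not taken from the text) estimates a probability as the fraction N(x)/N of trials in which the event occurred.

```python
import random

def estimate_probability(event, trials, rng):
    """Estimate P(x) as N(x)/N: the fraction of trials in which `event` occurs."""
    hits = sum(1 for _ in range(trials) if event(rng))
    return hits / trials

# Illustrative event: a fair die shows a six; the true probability is 1/6.
rng = random.Random(42)
p = estimate_probability(lambda r: r.randint(1, 6) == 6, trials=100_000, rng=rng)
print(p)  # close to 1/6 ≈ 0.1667
```

With a large number of trials the estimated frequency approaches the theoretical probability, which is exactly the stabilization of frequencies that statistical regularities rest upon.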

Probability theory considers theoretical distributions of random variables and their characteristics.

Mathematical statistics deals with methods of processing and analyzing empirical data.

Together these two related sciences constitute a unified mathematical theory of mass random processes, widely used for the analysis of scientific research.

The methods of probability theory and mathematical statistics are very often applied in the theory of reliability, survivability, and safety, which is widely used in various branches of science and technology.

3.5.2. The method of statistical modeling, or statistical trials (Monte Carlo method).

This method is a numerical method for solving complex problems; it is based on the use of random numbers that model probabilistic processes. The results obtained by this method make it possible to establish empirically the regularities of the processes under study.

The Monte Carlo method is implemented effectively only with the use of high-speed computers. To solve a problem by the Monte Carlo method one must have a statistical series, know the law of its distribution, and determine the mathematical expectation m(x) and the mean-square deviation.

With their help one can obtain a solution of any predetermined accuracy; that is, as the number of trials N grows, the sample mean converges to the mathematical expectation:

x̄(N) → m(x) as N → ∞.
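A minimal sketch of the Monte Carlo idea follows. The choice of integrand and the number of trials are illustrative assumptions; the point is that the sample mean of randomly drawn values converges to the mathematical expectation, and the standard error shrinks as the number of trials grows.

```python
import math
import random

def monte_carlo_integral(f, a, b, n, rng):
    """Estimate the integral of f over [a, b] by the Monte Carlo method.

    Draws n uniform random points, averages f over them, and scales by (b - a).
    Returns the estimate and its standard error, estimated from the sample variance.
    """
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    estimate = (b - a) * mean
    std_error = (b - a) * math.sqrt(var / n)
    return estimate, std_error

rng = random.Random(1)
est, err = monte_carlo_integral(math.sin, 0.0, math.pi, n=50_000, rng=rng)
# The exact value of the integral of sin(x) over [0, π] is 2.
print(est, err)
```

The standard error reported alongside the estimate is what allows one to obtain a solution of a predetermined accuracy: trials are added until the error falls below the required bound.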

3.5.3. System analysis method.

By system analysis is meant a set of techniques and methods for studying complex objects—systems, which are complexes of elements interacting with one another. The interaction of the elements of a system is characterized by direct and feedback connections.

The essence of system analysis consists in revealing these connections and establishing their influence on the behavior of the whole system. System analysis is carried out most fully and profoundly by the methods of cybernetics, the science of complex dynamic systems capable of perceiving, storing, and processing information for the purposes of optimization and control.

System analysis consists of four stages.

The first stage consists in stating the problem: the object, the goals of the research, and the criteria for studying the object and controlling it are determined.

At the second stage the boundaries of the system under study are delimited and its structure is determined. All objects and processes related to the stated goal are divided into two classes: the system proper and the external environment. A distinction is made between closed and open systems. In the study of closed systems, the influence of the external environment on their behavior is neglected. Then the individual components of the system—its elements—are singled out, and the interactions between them and the external environment are established.

The third stage of system analysis consists in constructing a mathematical model of the system under study. First the system is parameterized: the main elements of the system and the elementary influences upon it are described with the help of appropriate parameters. A distinction is made between parameters characterizing continuous and discrete, deterministic and probabilistic processes. Depending on the particular features of the processes, one or another mathematical apparatus is used.

As a result of the third stage of system analysis, complete mathematical models of the system are formed, described in a formal, for example algorithmic, language.

At the fourth stage the constructed mathematical model is analyzed: its extreme conditions are found by the methods of process optimization and system control, and conclusions are formulated. The quality of the solution is judged by an optimization criterion, which takes extreme values (minimum, maximum, minimax).

Usually one criterion is chosen as the main one, and maximally admissible values are set for the others. Sometimes mixed criteria are used, being functions of the original parameters.

On the basis of the chosen optimization criterion, the dependence of the criterion on the parameters of the model of the object (process) under study is established.

Various mathematical methods are known for optimizing the models under study: the methods of linear, nonlinear, and dynamic programming; probabilistic-statistical methods based on queueing theory; and game theory, which considers the development of processes as random situations.
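The scheme "choose one main criterion, set admissible limits for the others" can be sketched in a few lines. The example below is a deliberately naive grid search, not one of the named programming methods; the objective, constraint, and parameter range are all hypothetical.

```python
def grid_search_minimize(f, g, g_limit, lo, hi, steps):
    """Exhaustive grid search: minimize the main criterion f(x)
    subject to the secondary criterion satisfying g(x) <= g_limit."""
    best_x, best_f = None, float("inf")
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        if g(x) <= g_limit and f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# f(x) = (x - 3)^2 has its unconstrained minimum at x = 3,
# but the limit g(x) = x <= 2 pushes the optimum to the boundary x = 2.
x_opt, f_opt = grid_search_minimize(lambda x: (x - 3) ** 2,
                                    lambda x: x, g_limit=2.0,
                                    lo=0.0, hi=5.0, steps=1000)
print(round(x_opt, 2), round(f_opt, 2))  # → 2.0 1.0
```

For real models with many parameters, the linear, nonlinear, or dynamic programming methods mentioned above replace this brute-force search, but the logical structure—one criterion optimized, the others constrained—remains the same.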

Questions for self-control

Methodology of theoretical research.

The main sections of the stage of theoretical elaboration of scientific research.

Types of models and types of modeling of the research object.

Analytical methods of investigation.

Analytical methods of research combined with experiment.

The probabilistic-analytical method of research.

Methods of statistical modeling (Monte Carlo method).

Method of system analysis.

This section considers a group of methods that are among the most important in sociological research; these methods should be known to everyone intending to engage seriously in sociological study. They are aimed at revealing statistical regularities in empirical information, that is, regularities that hold "on average." In essence, sociology is concerned with the study of the "average person." Another important purpose of probabilistic-statistical methods in sociology is the assessment of the reliability of a sample: how likely is it that the sample gives more or less accurate results, and what is the margin of error of a statistical conclusion?

The main object of study in the application of probabilistic-statistical methods is random events. A random event is one that, under given conditions, may or may not occur. For example, if a sociologist conducts surveys on political preferences on a city street, then the event "the next respondent turned out to be a supporter of the ruling party" is random, provided that nothing about the respondent betrayed his political preferences beforehand. If, however, the sociologist interviewed the respondent outside the building of the Regional Duma, the event is no longer random. A random event is characterized by the probability of its occurrence. Unlike the classical problems about dice and card combinations studied in courses on probability theory, in sociological research probability is not so easy to compute.

The most important basis for an empirical estimate of probability is the tendency of the frequency of an event to approach its probability, where by frequency is meant the ratio of the number of times the event occurred to the number of times it could in principle have occurred. For example, if among 500 respondents chosen at random on the streets of the city, 220 turned out to be supporters of the ruling party, then the frequency of occurrence of such respondents is 0.44. In the case of a sufficiently large representative sample, we obtain both the approximate probability of the event and the approximate share of people possessing the given attribute. In our example, with a well-constructed sample, approximately 44% of city residents are supporters of the ruling party. Of course, since not all the townspeople were surveyed, and some respondents might have lied during the survey, a certain error is present.
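The size of that error can be quantified. One standard way (a normal-approximation confidence interval for a proportion, used here as an illustration rather than as the text's own method) is sketched below for the example of 220 supporters among 500 respondents.

```python
import math

def proportion_confidence_interval(hits, n, z=1.96):
    """Normal-approximation confidence interval for a population share.

    z = 1.96 corresponds to a confidence level of about 95%.
    Returns the sample share and the lower/upper interval bounds.
    """
    p = hits / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, p - margin, p + margin

# The example from the text: 220 supporters among 500 random respondents.
p, lo, hi = proportion_confidence_interval(220, 500)
print(round(p, 2), round(lo, 3), round(hi, 3))
```

For this sample the share is 0.44, with a 95% interval of roughly 0.40 to 0.48: the survey pins down the share of supporters to within about four percentage points, which is exactly the "sampling error" the paragraph refers to.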

Let us consider some of the problems that arise in the statistical analysis of empirical data.

Estimating the distribution of a quantity

If some attribute can be expressed quantitatively (for example, the political activity of a citizen as the number of times over the past five years he took part in elections at various levels), then the task can be set of estimating the law of distribution of this attribute as a random variable. In other words, the law of distribution shows which values the quantity takes more often and which more rarely, and how much more often or more rarely. Most often, in technology and in nature as well as in society, one encounters the normal law of distribution. Its formula and properties are set out in any statistics textbook, and Fig. 10.1 shows the form of its graph: a "bell-shaped" curve, which may be more "stretched" upward or more "spread out" along the axis of values of the random variable. The essence of the normal law is that the random variable most often takes values near a certain "central" value, called its mathematical expectation, and the farther from it, the more rarely do values "land" there.
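This concentration around the mathematical expectation is easy to observe in a simulation. In the sketch below the mean and spread are hypothetical values chosen for illustration; the well-known property being demonstrated is that roughly 68% of normally distributed values fall within one standard deviation of the mean.

```python
import random

# Draw samples from a normal law and check how they cluster
# around the mathematical expectation (the "bell" shape of Fig. 10.1).
rng = random.Random(0)
mu, sigma = 100.0, 15.0  # hypothetical expectation and spread
samples = [rng.gauss(mu, sigma) for _ in range(100_000)]

within_one_sigma = sum(1 for s in samples if abs(s - mu) <= sigma) / len(samples)
print(within_one_sigma)  # close to the theoretical 0.6827
```

The farther an interval lies from the expectation, the smaller the fraction of samples that fall into it, which is precisely the qualitative content of the normal law described above.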

Examples of distributions that can be treated as normal with only a small error are numerous. As early as the 19th century, the Belgian scholar A. Quetelet and the Englishman F. Galton showed that the frequency distribution of any demographic or anthropometric indicator (life expectancy, height, age at marriage, etc.) has a "bell-shaped" character. Later, F. Galton and his followers showed that psychological characteristics, for example abilities, also obey the normal law.

Fig. 10.1.

Example

The clearest example of the application of the normal distribution in sociology concerns the social activity of people. According to the normal law, socially active people in a society make up around 5–7%. All these socially active people go to rallies, conferences, seminars, and so on. Roughly the same share are those who completely shun participation in social life. The bulk of people (80–90%) seem indifferent to politics and public life: they follow the processes taking place but do not display any noticeable activity. Such people miss most political news, though they sometimes watch the news on television or on the Internet. They also go to vote in the most important elections, especially if they are "threatened with a stick" or "enticed with a carrot." Taken individually, the members of this 80–90% majority seem of little socio-political consequence, but centers of sociological research find them quite interesting, for they are numerous and their preferences cannot be ignored. The same applies to quasi-scientific organizations that conduct research commissioned by political figures and trading corporations. The opinion of this "gray mass" on key questions related to predicting the behavior of many thousands and millions of people in elections, as well as in acute political situations, in the stratification of society, and in conflicts between different social groups, is by no means a matter of indifference to these centers.

Of course, not all quantities are distributed according to the normal law. Besides it, the most important distributions in mathematical statistics are the binomial and exponential distributions, the Fisher–Snedecor distribution, the chi-square distribution, and Student's distribution.

Assessing the relationship between characteristics

The simplest case is when one merely needs to establish the presence or absence of a relationship. The most popular method for this question is the "Chi-square" method. This method works directly with categorical data: for example, sex or marital status clearly belong to this type. Some data appear numeric at first glance, but they can be "turned into" categorical data by dividing the range of values into several intervals. For example, work experience at a plant can be divided into the categories "less than one year", "from one to three years", "from three to six years", and "more than six years".
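As a small sketch of this "turning numeric into categorical" step, the work-experience example might be coded as follows (the function name and the handling of boundary values are assumptions, not from the source):

```python
def experience_category(years):
    """Map a numeric work experience (in years) to one of the
    categorical bins from the text. Bin edges (1, 3, 6) follow the
    example; treating each boundary as belonging to the upper bin
    is an assumption."""
    if years < 1:
        return "less than one year"
    elif years < 3:
        return "from one to three years"
    elif years < 6:
        return "from three to six years"
    else:
        return "more than six years"

# A numeric attribute becomes a categorical one:
sample = [0.5, 2, 4.5, 10]
categories = [experience_category(y) for y in sample]
```

After this transformation, the categorical values can be tabulated and fed into the chi-square procedure described below.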

Let attribute X take m possible values (x1, …, xm) and attribute Y take n possible values (y1, …, yn), and let qij be the frequency of the pair (xi, yj), i.e. the number of times this pair of values occurred. We then calculate the theoretical frequencies, i.e. how many times each pair of values would occur for completely unrelated attributes:

q*ij = Ri · Cj / N,

where Ri = Σj qij is the i-th row total, Cj = Σi qij is the j-th column total, and N = Σi,j qij is the total number of observations.

From the observed and theoretical frequencies, the value

χ² = Σi,j (qij − q*ij)² / q*ij

is calculated.

It is also necessary to calculate the number of degrees of freedom, using the formula df = (m − 1)(n − 1),

where m and n are the numbers of rows and columns of the contingency table. In addition, a significance level α is chosen. The higher the reliability we want to obtain, the lower the significance level must be taken. As a rule, the value 0.05 is chosen, which means that the results can be trusted with confidence 0.95. Next, from reference tables, using the number of degrees of freedom and the significance level, we find the critical value χ²cr. If χ² < χ²cr, the attributes X and Y are considered independent. If χ² > χ²cr, the attributes X and Y are considered dependent. If χ² is close to χ²cr, it is unsafe to draw conclusions about the dependence or independence of the attributes, and further investigation is required.

It is also important to note that the "Chi-square" criterion can be applied with high reliability only if all theoretical frequencies are not below a certain threshold, usually taken to be 5. Let v be the minimal theoretical frequency. For v > 5 the "Chi-square" criterion may be used. For v < 5 the use of the criterion becomes undesirable. For v = 5 the question remains open, and additional research on the applicability of the "Chi-square" criterion is required.
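The steps above (theoretical frequencies, the χ² statistic, and the degrees of freedom) can be sketched in pure Python. The 2×2 table below is invented purely for illustration (the book's Tables 10.1 and 10.2 are not reproduced here); the critical value 3.841 is the tabulated χ² value for significance level 0.05 and one degree of freedom.

```python
def chi_square_independence(table):
    """Compute the chi-square statistic, the degrees of freedom, and
    the theoretical (expected) frequencies for a contingency table,
    following the steps described in the text."""
    m = len(table)                 # number of rows (values of X)
    n = len(table[0])              # number of columns (values of Y)
    total = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(m)) for j in range(n)]
    # theoretical frequency q*_ij = (row total * column total) / grand total
    expected = [[row_sums[i] * col_sums[j] / total for j in range(n)]
                for i in range(m)]
    chi2 = sum((table[i][j] - expected[i][j]) ** 2 / expected[i][j]
               for i in range(m) for j in range(n))
    df = (m - 1) * (n - 1)
    return chi2, df, expected

# Hypothetical 2x2 table: rows = respondent's sex, columns = preferred team.
observed = [[30, 10],
            [15, 25]]
chi2, df, expected = chi_square_independence(observed)

CRITICAL_0_05_DF1 = 3.841   # tabulated critical value, alpha = 0.05, df = 1
dependent = chi2 > CRITICAL_0_05_DF1
```

For this invented table χ² ≈ 11.43 with one degree of freedom, well above the critical value, so the two attributes would be judged dependent.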

Let us give an example of applying the "Chi-square" method. Suppose that young residents of a certain city N were surveyed about their favorite football teams, with the following results (Table 10.1).

Let us test the hypothesis that the favorite football team of the young people of city N is independent of the respondent's sex, at the standard significance level of 0.05. We calculate the theoretical frequencies (Table 10.2).

Table 10.1

Results of the survey of respondents

Table 10.2

Theoretical frequencies

For example, the theoretical frequency for the young fans of Zirka is determined as

and the other theoretical frequencies are found similarly. Next we calculate the value of "Chi-square":

The number of degrees of freedom is then determined. For the significance level of 0.05, the critical value is found from the table:

Since χ² exceeds the critical value, and the excess is substantial, it can be said with practical certainty that the football preferences of the young men and young women of city N differ significantly, barring the case of an unrepresentative sample, for example if the investigator did not draw respondents from different districts of the city but limited the study to respondents from his own neighborhood.

The situation is more complicated if the strength of the relationship must be assessed quantitatively. In this case, methods of correlation analysis are often used. These methods are covered in more advanced courses in mathematical statistics.

Approximation of dependencies from point data

Suppose a set of points is given, namely empirical data (xi, yi), i = 1, …, n. It is required to approximate the actual dependence of the parameter y on the parameter x, and also to work out a rule for calculating values of y when x lies between two "nodes" xi.

There are two fundamentally different approaches to this problem. In the first, from the functions of a given family (for example, polynomials), a function is selected whose graph passes through the given points. The second approach does not force the graph of the function to pass through the points. The most popular method in sociology and a number of other sciences, the least squares method, belongs to the second group of methods.

The essence of the least squares method is as follows. Given a family of functions y(x, a1, …, am) with m undetermined coefficients, it is required to choose the coefficients by solving the optimization problem

d = Σi (y(xi, a1, …, am) − yi)² → min.

The minimum value of d can serve as a measure of approximation accuracy. If this value is too large, a different class of functions y should be chosen, or the current class should be expanded. For example, if the class "polynomials of degree at most 3" does not give satisfactory accuracy, we take the class "polynomials of degree at most 4" or even "polynomials of degree at most 5".
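The quantity d is easy to compute for any candidate function, which is how one model class is compared with another. A minimal sketch (the points and the two candidate models are purely illustrative):

```python
def residual(f, xs, ys):
    """The least-squares objective d: the sum of squared deviations
    of a candidate function f from the empirical points."""
    return sum((f(x) - y) ** 2 for x, y in zip(xs, ys))

# Comparing two candidate models on the same illustrative points:
xs, ys = [0, 1, 2, 3], [1, 2.9, 5.2, 6.9]
d_linear = residual(lambda x: 2 * x + 1, xs, ys)   # a linear model
d_const  = residual(lambda x: 4.0, xs, ys)         # a cruder, constant model
```

Here the linear model gives a much smaller d than the constant one, so the linear class would be judged the more adequate of the two.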

Most often the method is used for the family of "polynomials of degree at most N":

y(x, a0, …, aN) = a0 + a1x + … + aN x^N.

For example, N = 1 gives the family of linear functions, N = 2 the family of linear and quadratic functions, and N = 3 the family of linear, quadratic, and cubic functions. Let us denote

M1 = Σxi, M2 = Σxi², M′ = Σyi, M* = Σxiyi.

The coefficients of the linear function y = a0x + a1 (N = 1) are found by solving the system of linear equations

a0M2 + a1M1 = M*,
a0M1 + a1n = M′.

The coefficients of a function of the form a0 + a1x + a2x² (N = 2) are found by solving the system

a0n + a1Σxi + a2Σxi² = Σyi,
a0Σxi + a1Σxi² + a2Σxi³ = Σxiyi,
a0Σxi² + a1Σxi³ + a2Σxi⁴ = Σxi²yi.

The method for an arbitrary value of N can be worked out by noticing the regularity by which the system of equations is formed.
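For the linear case (N = 1) the system above can be solved directly, for example by Cramer's rule. A minimal sketch in pure Python; the sums in the comments correspond to M1, M2, M′, M*, and the data points are illustrative:

```python
def linear_least_squares(xs, ys):
    """Fit y = a0*x + a1 by solving the normal equations
         a0*M2 + a1*M1 = Mxy
         a0*M1 + a1*n  = My
    where M1 = sum(x), M2 = sum(x^2), My = sum(y), Mxy = sum(x*y)."""
    n = len(xs)
    M1 = sum(xs)
    M2 = sum(x * x for x in xs)
    My = sum(ys)
    Mxy = sum(x * y for x, y in zip(xs, ys))
    # Cramer's rule for the 2x2 system
    det = M2 * n - M1 * M1
    a0 = (Mxy * n - M1 * My) / det
    a1 = (M2 * My - M1 * Mxy) / det
    return a0, a1

# Points lying exactly on y = 2x + 1 are recovered exactly:
a0, a1 = linear_least_squares([0, 1, 2, 3], [1, 3, 5, 7])
```

For noisy data the same call returns the line that minimizes the sum of squared deviations d.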

Let us give an example of the least squares method. Suppose the membership of a certain political party changed over the years as follows:

Note that the party membership figures change almost uniformly from year to year, which allows us to approximate the dependence by a linear function. To simplify the calculations, instead of x (the year) we introduce the variable t = x − 2010, i.e. we take the first year as "zero". We calculate M1 and M2:

Now we calculate M′ and M*:

The coefficients a0, a1 of the function y = a0t + a1 are found by solving the corresponding system of equations.

Solving this system, for example by Cramer's rule or by the substitution method, we obtain a0 = 11.12, a1 = 3.03. In this way we obtain the approximation y = 11.12t + 3.03.

This allows us not only to operate with a single function instead of a set of empirical points, but also to calculate values of the function beyond the range of the original data, that is, to "predict the future".
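Such "prediction" amounts to evaluating the fitted line at a year outside the observed range. A sketch using the coefficients reported in the example; note that pairing a0 with the slope of t follows the stated form y = a0·t + a1 and is otherwise an assumption, since the original data table is not reproduced here:

```python
# Coefficients as reported in the text's example; a0 multiplies t
# per the stated form y = a0*t + a1, where t = year - 2010.
a0, a1 = 11.12, 3.03

def predict_membership(year):
    """Evaluate the fitted line at an arbitrary year, including
    years beyond the observed data ("predicting the future")."""
    t = year - 2010
    return a0 * t + a1

forecast = predict_membership(2016)   # extrapolation beyond the data
```

Extrapolation of this kind is only as trustworthy as the assumption that the linear trend continues, which should be kept in mind when forecasting.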

It is also important to note that the least squares method can be used not only for polynomials but also for other families of functions, for example logarithmic and exponential ones:

y = a·ln x + b, y = a·e^(bx).

The degree of confidence in a model constructed by the least squares method can be assessed by means of "R-squared", or the coefficient of determination. It is calculated as

R² = 1 − Σi (yi − f(xi))² / Σi (yi − ȳ)²,

where f is the fitted function.

Here ȳ = (1/n)Σi yi is the mean of the empirical values. The closer R² is to 1, the more adequate the model.
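A minimal sketch of this computation (the data are illustrative):

```python
def r_squared(ys, y_fit):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot, where
    SS_res is the residual sum of squares of the model and SS_tot the
    total sum of squares around the mean of the empirical values."""
    y_mean = sum(ys) / len(ys)
    ss_res = sum((y - f) ** 2 for y, f in zip(ys, y_fit))
    ss_tot = sum((y - y_mean) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# A perfect fit gives R^2 = 1; a rougher fit gives a smaller value:
perfect = r_squared([1, 3, 5, 7], [1, 3, 5, 7])
rough = r_squared([1, 3, 5, 7], [1.5, 2.5, 5.5, 6.5])
```

For the second call the model explains most but not all of the variation, so R² falls below 1 while remaining close to it.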

Outlier detection

An anomalous value that stands out sharply within a sample or a series is called an outlier. For example, let the percentage of the citizens of a region who view a certain politician favorably be, in 2008–2013, respectively 15, 16, 12, 30, 14, and 12%. It is easy to see that one of the values differs sharply from all the others: in 2011 the politician's rating sharply exceeded the usual values, which lay in the range of 12–16%. The presence of outliers may have various causes:

  • 1) measurement errors;
  • 2) the unusual nature of the input data (for example, if the average share of votes cast for a politician is analyzed, then the values at a polling station located in a military unit may differ significantly from the average value for the locality);
  • 3) a consequence of the distribution law (values differing sharply from the rest can arise by virtue of the mathematical law itself; for example, under a normal distribution a sample may contain an object whose value differs sharply from the mean);
  • 4) cataclysms (for example, during a short but acute political crisis the level of political activity of the population may change dramatically, as happened during the "color revolutions" of 2000–2005 and the "Arab Spring" of 2011);
  • 5) external influences (for example, if in the year preceding the survey the politician made highly popular decisions, his rating for that year may turn out to be noticeably higher than in other years).

Many data analysis methods are not robust to outliers, so for their effective application the data must first be cleaned of outliers. A vivid example of a non-robust method is the least squares method. The simplest method of searching for outliers uses the so-called interquartile range. Define the range

[Q1 − 1.5(Q3 − Q1); Q3 + 1.5(Q3 − Q1)],

where Qm is the value of the m-th quartile. If a member of the series does not fall within this range, it is regarded as an outlier.

Let us explain with an example. The meaning of quartiles is that they divide the series into four approximately equal groups: the first quartile "separates" the left quarter of the series sorted in ascending order, the third quartile the right quarter, and the second quartile lies in the middle. Let us explain how to find Q1 and Q3. Let the number series, sorted in ascending order, contain n values. If n + 1 is divisible by 4 without remainder, then Qk is the k(n + 1)/4-th member of the series. For example, given the series 1, 2, 5, 6, 7, 8, 10, 11, 13, 15, 20, the number of members is n = 11. Then (n + 1)/4 = 3, i.e. the first quartile Q1 = 5 is the third member of the series; 3(n + 1)/4 = 9, i.e. the third quartile Q3 = 13 is the ninth member of the series.

The case is a little more complicated when n + 1 is not a multiple of 4. For example, given the series 2, 3, 5, 6, 7, 8, 9, 30, 32, 100, the number of members is n = 10. Then (n + 1)/4 = 2.75, a position between the second member of the series (v2 = 3) and the third member (v3 = 5). We then take the value 0.75v2 + 0.25v3 = 0.75·3 + 0.25·5 = 3.5; this is Q1. Further, 3(n + 1)/4 = 8.25, a position between the eighth member of the series (v8 = 30) and the ninth member (v9 = 32). We take the value 0.25v8 + 0.75v9 = 0.25·30 + 0.75·32 = 31.5; this is Q3. There are other ways of calculating Q1 and Q3, but the variant given here is the recommended one.
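The quartile rule and the outlier range can be combined into a short sketch. The interpolation weights below follow the book's worked example (0.75·v2 + 0.25·v3 at position 2.75); other conventions exist and give slightly different quartiles. The last series is illustrative (the book's first example series with 20 replaced by 100, so that an outlier is actually present):

```python
def quartile(sorted_vals, k):
    """k-th quartile (k = 1, 2, 3) of an ascending-sorted series.
    Position p = k*(n+1)/4; if p is an integer, take the p-th member
    (1-indexed); otherwise combine the two neighbouring members with
    the weights used in the book's worked example."""
    n = len(sorted_vals)
    p = k * (n + 1) / 4
    i = int(p)
    f = p - i
    if f == 0:
        return sorted_vals[i - 1]
    return f * sorted_vals[i - 1] + (1 - f) * sorted_vals[i]

def find_outliers(series):
    """Flag members lying outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    s = sorted(series)
    q1, q3 = quartile(s, 1), quartile(s, 3)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in series if x < lo or x > hi]

# Reproducing the book's numbers for the second example series:
q1 = quartile([2, 3, 5, 6, 7, 8, 9, 30, 32, 100], 1)   # 3.5
q3 = quartile([2, 3, 5, 6, 7, 8, 9, 30, 32, 100], 3)   # 31.5
outliers = find_outliers([1, 2, 5, 6, 7, 8, 10, 11, 13, 15, 100])
```

For the modified series Q1 = 5 and Q3 = 13, so the range is [−7; 25] and the value 100 is flagged as an outlier.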

  • Strictly speaking, in practice one should speak of a law "close to" the normal one, since the normal law is defined for a continuous quantity over the entire real axis, whereas many real quantities cannot strictly satisfy the properties of normally distributed quantities.
  • Nasledov A. D. Mathematical Methods of Psychological Research. Analysis and Interpretation of Data: a textbook. St. Petersburg: Rech, 2004. Pp. 49–51.
  • On the most important distributions of random variables, see: Orlov A. I. Mathematics of Chance: Probability and Statistics, Basic Facts: a textbook. Moscow: MZ-Press, 2004.