Mathematical modeling and analysis of adaptive methods for forecasting exchange rates. Mathematical methods of forecasting (Department of Computer Science and Technology, Moscow State University)

Mathematical forecasting methods can be built on various functions, time series, and analytical dependencies. For mathematical modeling and forecasting of foreign exchange markets, the input information can be the price dynamics and its derivatives (indicator values, significant levels, etc.) as well as macroeconomic indicators of the market. Mathematical models for forecasting financial time series use price dynamics as their input information. When working with time series, one deals with information models, that is, descriptions of the original objects by means of diagrams, graphs, formulas, drawings, and so on. One of the most important kinds of information modeling is mathematical modeling, in which the descriptions are formulated in the language of mathematics, and the study of such models is carried out with mathematical methods.

Mathematically, the problem of forecasting the exchange rate can be reduced to the problem of approximating multidimensional functions and, therefore, to the problem of constructing a multidimensional mapping. Depending on the type of output variables, function approximation can take the form of classification or regression. Accordingly, two major subtasks can be distinguished in exchange rate forecasting models: (1) building a mathematical model; (2) training the expert networks that implement the solution of the problem. As a result of studying the subject area, a mathematical forecasting model should be developed that includes a set of input variables, a method for generating input features, and a method for training the expert system.

Analytical dependencies

Let us consider the features of forecasting models based on analytical dependencies.

Such a model is based on an analysis of the mechanism of exchange rate formation. The form of the formula in this case depends on the nature and type of the interacting factors that shape the exchange rate. The model rests on the purchasing power parity hypothesis. As real economic systems are considered further, new factors are added, and the generalized model singles out the main factors influencing the formation of the exchange rate.

Increasing the efficiency of short-term currency transactions is one of the important tasks of banks and other investors who buy and sell various currencies in significant volumes, trying to put their free reserves to work in order to avoid losses from market fluctuations in exchange rates and to gain additional profit. Moreover, currency operations are carried out at high speed via the Internet, since it is very important to enter the foreign exchange market with an offer before competitors do. All this is part of the continuous process of forming an optimal structure of foreign exchange reserves.

The effectiveness of foreign exchange transactions depends significantly on the reliability of forecasts of currency fluctuations. That is why short-term forecasting of exchange rates is of great practical significance for the operational activities of banks and other investors, and the question of whether statistical methods can be used for this purpose is relevant and natural. The problem of short-term forecasting of exchange rates with statistical models is posed on the premise that, to conduct foreign exchange transactions successfully, forecasts one day ahead are needed. In the film "Pi", for example, the mathematician Max Cohen spends years trying to find and decipher a universal numerical code according to which all exchange rates change. As he approaches the solution, the world around Max turns into a dark nightmare: he is pursued by powerful Wall Street analysts who want the code of the universe for themselves. On the verge of madness, Max must make a decisive choice between order and chaos and decide whether he can cope with the powerful force his brilliant mind has awakened. But that is fiction. In reality it is not hard labor but the train of thought that determines investment income, and only adequate mathematical modeling can serve to assess the effectiveness of an idea.

Adaptive forecasting methods

It is difficult to draw a clear line separating adaptive forecasting methods from non-adaptive ones. Even forecasting by extrapolation of ordinary regression curves contains a certain element of adaptation: with each new batch of actual data the parameters of the regression curves are recalculated and refined, and after a sufficiently long period even the type of curve may be changed. However, the degree of adaptation here is very small; moreover, it decreases over time together with the growth of the total number of observations and, accordingly, with the decline in the relative weight of each new point in the sample.

The sequence of the adaptation process is as follows. Let the model be in some initial state and let a forecast be made from it. When one unit of time (one modeling step) has elapsed, we analyze how far the value produced by the model is from the actual value of the series. The forecast error is fed back to the input of the system and is used by the model, in accordance with its logic, to move from one state to another so as to better match its behavior to the dynamics of the series. The model must respond to changes in the series with compensating changes. A forecast is then made for the next point in time, and the whole process is repeated. Adaptation is thus carried out iteratively with each new actual point of the series. But what should the rules for the transition of the system from one state to another be, and what is the logic of the adaptation mechanism?

In essence, every researcher answers this question intuitively. The logic of the adaptation mechanism is specified a priori and then verified empirically. When constructing a model, we inevitably endow it with innate properties and, at the same time, for greater flexibility, we must provide it with mechanisms of conditioned reflexes, acquired or lost with a certain inertia; together these constitute the logic of the adaptation mechanism. Because each individual model is simple and the initial information is limited, often to a single series, no single adaptive model can be expected to suit any series and any variation of behavior. Adaptive models are quite flexible, but one cannot count on their universality. Therefore, when constructing and interpreting specific models, one should take into account the most probable patterns of development of the real process and relate the dynamic properties of the series to the capabilities of the model. The model should be given those adaptive properties that are sufficient for it to track the real process with the required accuracy.

At the same time, one cannot hope for successful self-adaptation of a model that is more general than what is needed to reflect the process, because an increase in the number of parameters makes the system excessively sensitive, causes it to oscillate, and degrades the forecasts obtained from it. Thus, when building an adaptive model one has to choose between a general and a particular model and, weighing their advantages and disadvantages, prefer the one from which the smallest forecast error can be expected. It is therefore necessary to have a certain stock of specialized models, varied in structure and functional properties. To compare the possible alternatives, a criterion of model usefulness is needed. Although such a criterion is debatable in general, in short-term forecasting the accepted criterion is usually the mean square of the forecast error; the quality of the model is also judged by the presence of autocorrelation in the errors. In more developed systems, the process of trial and error is carried out by analyzing both sequential (in time) and parallel (competing) modifications of the model.
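The following is a minimal sketch, under assumed parameter values and with invented data, of the adaptation loop and comparison criterion described above: simple exponential smoothing reacts to each one-step forecast error, and the mean square of those errors is used to compare competing modifications (here, different smoothing constants).

```python
# Sketch only: simple exponential smoothing as an adaptive model with
# error feedback; the smoothing constant alpha and the series are
# illustrative assumptions, not values from the text.

def one_step_forecasts(series, alpha):
    state = series[0]                       # initial state of the model
    forecasts = []
    for actual in series[1:]:
        forecasts.append(state)             # forecast made before seeing `actual`
        error = actual - state              # forecast error fed back to the model
        state += alpha * error              # compensating change of the state
    return forecasts

def mean_square_error(series, alpha):
    forecasts = one_step_forecasts(series, alpha)
    errors = [a - f for a, f in zip(series[1:], forecasts)]
    return sum(e * e for e in errors) / len(errors)

rates = [71.2, 71.5, 71.1, 70.8, 71.0, 71.4, 71.3]   # hypothetical exchange rates
for alpha in (0.1, 0.3, 0.7):                         # competing modifications
    print(alpha, round(mean_square_error(rates, alpha), 4))
```

The modification with the smallest mean square error would be preferred, in line with the criterion discussed above.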

Short-term forecasting of exchange rates

Information about the dynamics of exchange rates gives the impression of chaotic movement: falls and rises of the rate follow each other in seemingly random order. Even if over a long period there is a trend, for example towards growth, the chart readily shows that this trend makes its way through complex movements of the exchange rate time series. The direction of the series changes all the time under the influence of irregular and often unknown forces. The object under study is fully exposed to the elements of the global market, and there is no exact information about the future movement of the exchange rate, yet a forecast has to be made. At the same time, it is quite obvious that predicting even the sign of the exchange rate change is very difficult. This is usually done by experts who analyze the current environment and try to identify factors regularly associated with the movement of the exchange rate (fundamental analysis). Builders of formal models likewise try to identify a set of significant factors and to construct some indicator on their basis, but neither practical experts nor formal methods have yet given good, stable results. We believe this is explained, first of all, by the fact that if there really is a set of factors that influence the exchange rate in a stable way, their impact is reliably hidden by the superimposed random component and by control interventions.

As a result, these factors and their influence are quite difficult to isolate. Short-term forecasting of the exchange rate should therefore be treated essentially as the task of forecasting the movement of an isolated time series, driven mainly by the mass behavior of the small and large financial players who carry out the bulk of currency transactions in the foreign exchange market. This approach can be classified as technical analysis. Of course, an individual participant in the currency game is free to change his strategy completely arbitrarily. Yet it can be assumed that the behavior of the whole mass of participants, acting on the exchange rate through the relationship between supply and demand, has in the current period some dominant logic revealed through the law of large numbers. For example, when the exchange rate of a currency falls, people may buy it, expecting a further rise, and such mass demand for the currency really does push its rate up. Or, conversely, if after a depreciation confidence in the currency falls and further depreciation is expected, then mass supply prevails and the rate falls even lower. Note that with this simplified approach the dynamics of the time series itself can be read as a chronological record of the mass behavior of participants in the foreign exchange market. This makes it possible to build a model on the series itself, without involving additional information, and to use discussions of the mass behavior of market participants only for qualitative interpretation. If it were possible to find in the dynamics of the series at least short-term patterns that are realized with a probability of more than 50%, this would give grounds to count on success. It would then be possible to forecast rates with statistical methods that capture more or less stable relationships between successive events in the time series.

In this case the following task is posed. First, find out whether any statistical methods, whose purpose is to describe recurring events or situations characterized by relatively stable relationships, are applicable to short-term forecasting of exchange rates. Second, if statistical methods are applicable, establish their most promising class, describe the characteristics of these methods, and pay special attention to the simplest of them. Third, show practical results on an example. Note that the problems of forecasting exchange rates have always received considerable attention. Among publications on this topic we point, for example, to the work of C. Granger and O. Morgenstern (Granger Clive W.J., Morgenstern Oscar. Predictability of Stock Market Prices. Massachusetts, 1970), which examines the dynamics of stock prices and provides an extensive bibliography. That monograph in effect concludes that if there is any dependence of this kind in the series, it is most likely a dependence between adjacent rate increments. The question arises, however, whether we are trying to predict purely random fluctuations of exchange rates. The answer to this question is found in a special study.

Modern forecasting

A new view of forecasting has established it as an essential element of the decision-making process. The logical consequence of the strengthened role of forecasting has been a rise in the requirements for the validity and reliability of forecast estimates. However, the degree to which the modern forecasting apparatus meets these new requirements remains extremely low. Even the use of adaptive models, which usually make it possible to achieve the required adequacy in describing the predicted processes, only partially solves the problem of increasing reliability. The modern economy generates processes with such complex dynamics that identifying their patterns with the apparatus of modern forecasting often turns out to be an intractable task. Improving this apparatus requires, first of all, new ideas and new approaches on whose basis one can implement mechanisms and methods for reflecting dynamics formed under the influence of effects whose possibility is not detectable in the data of the historical period. A clear contradiction arises, and overcoming it will contribute to a new view of forecasting as the proactive reflection, in a probabilistic environment, of ideas about the process under study in the form of a trajectory built on objective trends and subjective expectations.

Within economic forecasting, the adaptive approach is developing in three directions. The first is focused mainly on making adaptive forecasting models more complex. The idea of the second direction is to improve the adaptive mechanism of forecasting models. The third direction combines adaptive principles with other forecasting methods, in particular with simulation modeling. The works of V.V. Davnis are devoted to the development of adaptive simulation models.

Market development is determined by fundamental factors, but the opposite is also true: fundamental factors are determined by the market, that is, by the behavior of market participants, their assessments and expectations. At the same time, the ability to assess the development of market situations correctly depends on the ability to anticipate the prevailing expectations of market participants rather than on the ability to predict changes in the real world. Current ideas for developing the mathematical forecasting apparatus do not sufficiently take these properties of the activity of economic systems into account, which reduces the plausibility of forecast estimates even when interpolation accuracy is high. At the same time, forecasts based only on subjective information are aimed at predicting qualitative characteristics, and so their use is possible only in special cases. This brings to the fore the problem of constructing forecasts that combine extrapolation with subjective estimates. Research has been carried out in this area, but analysis of its results shows that a heuristic, exploratory element predominates in it, which indicates that the problem of constructing combined forecasts is still at an initial stage of development.

Literature

1. Sobolev V.V. Currency Dealing in Financial Markets. Novocherkassk: South-Russian State Technical University (NPI), 2009. 442 p.
2. Lukashin Yu.P. Adaptive Methods for Short-Term Forecasting of Time Series: Textbook. Moscow: Finance and Statistics, 2003. 416 p.
3. Davnis V.V., Tinyakova V.I. Adaptive Models: Analysis and Forecast in Economic Systems. Voronezh: Voronezh State University Press, 2006. 380 p.
4. Mishkin F. The Economics of Money, Banking and Financial Markets: Textbook for universities / Transl. from English by D.V. Vinogradov, ed. M.E. Doroshenko. Moscow: Aspect Press, 1999. 820 p.
5. Lukashin Yu.P. On the possibility of short-term forecasting of exchange rates using the simplest statistical models // Vestnik of Moscow State University. Ser. 6: Economics. 1990. No. 1. Pp. 75-84.
6. Sobolev V.V. Financiers. Novocherkassk: South-Russian State Technical University (NPI), 2009. 315 p.
7. Soros G. The Alchemy of Finance: Transl. from English. Moscow: Infra-M, 1996. 416 p.


Yurashev Vitaly Viktorovich, Cand. Sc., Scientific Director of the Gradient company

Shelest Igor Vladimirovich System Architect at Jet Infosystems

Forecasting in business is important because of its possible stabilizing effect. Reasonable forecasts encourage people to act more rationally and keep them from overreacting towards pessimism or optimism. A good forecast helps the company make rational decisions about the goods or services it produces, whereas the lack of a forecast forces the company's management to take unnecessary precautions.

Forecasting methods usually require a large investment of time and money. A businessman, however, needs methods that do not demand complex reasoning in everyday work and that can be implemented as programs. Forecasting methods are needed that do not require detailed individual analysis. In addition, it is desirable that such models make use of the knowledge of the market situation possessed by the people who work in it every day.

Since forecasting is a difficult problem, a firm clearly needs several sets of forecasts rather than a single descriptive forecast. This helps it act more decisively, which results in increased profits, greater organizational efficiency, and growth in its prestige.

The input data for a forecast based on time series are usually the results of sample observations of variables, either of intensity (for example, demand for a product) or of state (for example, price). Decisions made at the present moment take effect in the future after a certain period of time whose length can be predicted.

Time series are data ordered in time. Accordingly, we will henceforth denote the time period by t and the corresponding data value by y(t). Note that the members of a time series are quantities or other numerical information obtained at a certain point in time. For example, the weekly sales totals of a store, recorded at the end of each week during the year, form a time series.

A trend is the general direction and dynamics of a time series. This definition emphasizes the notion of "overall direction", because the underlying trend must be distinguished from short-term fluctuations, namely cyclical and seasonal ones. Examples of cyclical fluctuations are prices for industrial raw materials, stock prices, sales volumes in wholesale and retail trade, and so on. Seasonal fluctuations occur in time series describing sales, production, employment, etc.; they are strongly influenced by weather conditions, fashion, style, and the like. We note especially that irregular or random fluctuations in time series do not obey any pattern, and there is no theory that can predict their behavior.

From the point of view of sound management decisions, including periodic (cyclical and seasonal) fluctuations in the overall model can improve the quality of the forecast and make it possible to predict the expected high and low values of the forecast variables. It must be borne in mind, however, that "business" or economic cycles cannot be reproduced with such precision that conclusions about future booms and busts could practically be drawn from an analysis of the past.

The work presents linear, cyclical and "exponential" trends. A few words about the exponential trend. Analysis of the life cycles of goods, services and innovations, and reflection on the processes occurring around us, has shown that the model of the development and death of biological systems is an effective tool for studying many phenomena in business. Moreover, as in business, the performance indicators of a biological system over time are not linear at all stages of its development. The life cycles mentioned above were simulated, and their time elasticity turned out to be a linear function; the coefficients of this function make it possible not only to account for the nonlinear mechanisms of life cycles but also to predict their onset. As a result we obtained a trend that we called "exponential" because it contains an exponential in time.

Consider the time series y(1), y(2), ..., y(i), ..., y(T). The function represented by this series is to be approximated by a trigonometric polynomial whose periodic components are unknown. The advantage of this model is that it ensures forecast stability by searching over frequencies; the coefficients are calculated from the entire data set.

In practice such a model is difficult for the user, so a computer program was developed. Goodness of fit to the historical data is checked by the least squares method (see: Taha H.A. Operations Research. Moscow: Williams, 2005). In many cases, changes in the process under study can be anticipated in advance and incorporated into the forecast model; after all, experienced managers can foresee the nature of coming changes. The program fits the trends through an optimal choice of frequencies for the given series. To adjust the forecast, one can vary not only the trends but also take into account the results of a subjective forecast.

We will look for a trend in the form Y(t) = C + A·sin(wt) + B·cos(wt).

Since the values of this function at the points 1, 2, ..., T are known, we obtain a system of T linear equations in the coefficients A, B, C, with w as a parameter.

We solve this system by least squares (for T > 3) and obtain the coefficients A, B, C as functions of w. The value of w must then be chosen so that the trend approximates the values of the time series as closely as possible. The optimization is carried out by successive approximations; the initial value of w, from which the approximations start, is found from formulas given, for example, in the mathematics handbook by G. Korn and T. Korn (Moscow: Nauka, 1989, Ch. 20).

We subtract the fitted theoretical values at times t = 1, 2, ..., i, ..., T from the actual values y(1), y(2), ..., y(i), ..., y(T) originally given as the terms of the time series. For the resulting residuals, treated as a new time series, we repeat the above procedure.
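The sketch below illustrates the procedure just described; it is not the authors' program. For a trial frequency w the coefficients A, B, C are found by linear least squares, w itself is chosen here by a simple grid search (a crude stand-in for the successive approximations mentioned above), and the returned residuals can be fed back into the same routine to extract further harmonics. The data and grid bounds are invented for illustration.

```python
import numpy as np

def fit_harmonic(y, w_grid=None):
    """Fit Y(t) = C + A*sin(w*t) + B*cos(w*t) to y(1..T) by least squares."""
    t = np.arange(1, len(y) + 1, dtype=float)
    if w_grid is None:
        # assumed search range for the frequency; a real implementation
        # would start from a formula-based initial value and refine it
        w_grid = np.linspace(0.01, np.pi, 500)
    best = None
    for w in w_grid:
        X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # A, B, C for this w
        resid = y - X @ coef
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            best = (sse, w, coef)
    sse, w, (A, B, C) = best
    trend = C + A * np.sin(w * t) + B * np.cos(w * t)
    return w, A, B, C, trend, y - trend   # residuals can be fitted again

y = np.array([12., 15., 19., 17., 13., 11., 14., 18., 20., 16., 12., 10.])
w, A, B, C, trend, resid = fit_harmonic(y)
print(round(w, 3), np.round(trend, 1))
```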

The forecast accuracy is 1-3%, sometimes deteriorating to 5-10%; everything depends on the presence of noise, which can significantly affect the forecast. If the retrospective series is long, the program clearly identifies the regular components of the process. With a short retrospective series (up to 5-8 values), exponential smoothing should be used. The exponential smoothing method is based on a moving average, but it overcomes the drawback of the moving average method, in which all data used to calculate the average are weighted equally: exponential smoothing assigns more weight to the most recent observations. Like the method presented in this paper, it is especially effective in forecasting time series with cyclical fluctuations and without strong random fluctuations (see: Taha H.A. Operations Research).
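A small illustration of the difference stressed above, using invented numbers: a k-point moving average weights the last k observations equally, while exponential smoothing weights past observations geometrically, so the latest points dominate the forecast.

```python
# Sketch with assumed window size k and smoothing constant alpha.

def moving_average_forecast(y, k=3):
    return sum(y[-k:]) / k                      # equal weights 1/k

def exp_smoothing_forecast(y, alpha=0.5):
    s = y[0]
    for value in y[1:]:
        s = alpha * value + (1 - alpha) * s     # weight decays geometrically with age
    return s

sales = [100, 102, 98, 110, 130]                # short retrospective series
print(moving_average_forecast(sales), exp_smoothing_forecast(sales))
```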

Let's give an example of calculating the projected sales volume (Tables 1, 2).

Table 1. Initial data

Table 2. Calculating a forecast using a sinusoidal trend

The calculation results are presented in the form of graphs in Figure 1 (theoretical function - black line, initial data - black, trend - gray).

Fig. 1. Calculation of projected sales volume using a sinusoidal trend

Let's give an example of using an exponential trend to calculate a sales forecast.

This example examines the change in sales volume during and after an advertising campaign (Tables 3 and 4).

Table 3. Initial data

Table 4. Calculating a forecast using an exponential trend

The calculation results are presented in the form of graphs in Figure 2 (theoretical function - gray dash, initial data - black, trend - gray).

Fig. 2. Calculation of projected sales volume using an exponential trend

The software product we have developed, adapted to work in specific conditions, is universal, reliable, and resistant to changing conditions. In addition, and this is significant, the number of tasks it solves can be extended: for example, when forecasting sales volumes one can also analyze the influence of each factor (advertising, exhibitions, the Internet) on profit.

One of the advantages of the project is its low cost, so the results obtained can be compared with those produced by other methods; any difference will give management grounds for deeper research.

The program is easy to use: one only has to enter the necessary data from the information field into it. The main difficulty lies in obtaining the primary data, that is, in creating the information field in which to work.

Everything depends on the conditions under which the data must be obtained (field or laboratory). The ability of experts to build a quasi-information field simplifies the work at the preliminary stage of research, but in that case the "field" flavor of the project is lost.

The value of the project also lies in the mobility with which the assigned tasks are solved, in quick response to environmental changes, and in the easy incorporation of changes and additions when working on a specific task.


Plan

Introduction

1. Essence and classification of methods of economic and mathematical forecasting

1.1 Basic methods of economic and mathematical forecasting

1.2 Basic ideas of technology for scenario expert forecasts

2. Application of information technologies in economic and mathematical forecasting

Conclusion

List of used literature

Introduction

The economic system in our country, which had developed by the end of the 80s, was characterized by relatively high material and capital intensity of production, low rates of development of scientific and technological progress, and significant economic imbalance. The problems that arose related to low labor productivity, technical and technological backwardness, environmental degradation, low levels of industrial output and structural imbalances had to be solved by economic reforms.

Over the course of several years of economic reforms, it was possible to solve only a number of tactical problems, in particular, to achieve an improvement in the relationship between the population’s monetary demand and the supply of consumer goods. But this was achieved not by increasing the output of the latter, but due to a decrease in real incomes of the main part of the population.

The current socio-economic situation of the Russian Federation is characterized by an acute structural crisis, which has caused a sharp drop in living standards. This crisis is expressed, among other things, in a decrease in the output of industrial and consumer goods, and in a number of cases, in the cessation of production and economic activities of industrial enterprises. As a consequence of this situation, there is a reduction in spending on social needs. Another important aspect of the crisis situation is the loss of not only international, but also domestic regional markets for products for domestic producers.

The fall in domestic production naturally creates the need for large-scale imports of industrial goods, and especially of consumer goods, in particular such an important item as food. In turn, expanding imports requires stimulating exports in order to earn foreign currency. But since domestic products currently have no access to international markets (for various reasons: low quality, lack of competitiveness, etc.), raw materials (oil, gas, ores, timber) are exported instead, which has an extremely negative impact on the general state of the country's economy.

The problems that have arisen cannot be solved even if the rate of inflation slows down. Moreover, small investments in many sectors of industrial production are absolutely ineffective in the absence of clear, real planning and forecasting of economic processes.

The effectiveness of economic research and forecasts today largely depends on how fully and accurately they reflect the characteristic features of economic processes. At the same time, the most significant impact on the reliability and validity of research is exerted by indicators that characterize the increase in complexity, speed, uncertainty and the possible number of alternatives for the implementation of economic processes.

At the present stage, it is necessary to prepare and make management decisions in conditions of a high degree of dynamic change in economic processes, their sharply increased complexity, non-determinism and non-linearity. At the same time, when developing forecast options for the development of economic processes, it is necessary to take into account the complexity, consistency, multifactorial and multivariate nature of their further development.

The purpose of the work is to study the essence, classification and tools of economic and mathematical forecasting methods.

The objectives of the work are: 1) to study the essence and classification of economic and mathematical forecasting methods;

2) to consider the use of information technology in economic and mathematical forecasting.

1. Essence and classification of methods of economic and mathematical forecasting

1.1 Basic methods of economic and mathematical forecasting

Let us briefly consider the various forecasting (prediction, extrapolation) methods used in the socio-economic field. There is a large number of publications on forecasting. Within econometrics there is a scientific and academic discipline, "Mathematical methods of forecasting". Its goal is the development, study and application of modern mathematical methods of econometric (in particular statistical, expert and combined) forecasting of socio-economic phenomena and processes, and these methods must be developed to a level that allows them to be used in the practical work of an economist, engineer or manager.

The main objectives of this discipline include the development, study and application of modern mathematical and statistical forecasting methods (including nonparametric least squares methods with assessment of forecast accuracy, adaptive methods, autoregression methods, etc.); the development of the theory and practice of expert forecasting methods, including methods for analyzing expert assessments based on the statistics of non-numerical data; forecasting methods under risk; and combined forecasting methods that jointly use economic-mathematical and econometric (both statistical and expert) models. The theoretical basis of forecasting methods is formed by mathematical disciplines (primarily probability theory and mathematical statistics, discrete mathematics, and operations research), as well as by economic theory, economic statistics, management, sociology, political science and other socio-economic sciences.

As has been generally accepted since the time of the founder of scientific management, Henri Fayol, forecasting and planning are the basis of a manager's work. The essence of econometric forecasting is the description and analysis of future development, in contrast to planning, in which the future movement is prescribed. For example, the forecaster's conclusion may be that within an hour we can walk no more than 5 km from point A, while the planner's instruction is that within an hour we must be at point B. Clearly, if the distance between A and B is no more than 5 km, the plan is realistic (feasible); if it is more than 10 km, it cannot be carried out under the given conditions. One must either abandon the unrealistic plan or change the conditions of its implementation, for example by travelling by car rather than on foot. This example demonstrates both the capabilities and the limitations of forecasting methods: they can be applied successfully provided the situation develops with some stability, and they fail when there are abrupt changes.

One way to use forecasting methods is to identify the need for change by "reduction to absurdity". For example, if the population of the Earth doubles every 50 years, it is not difficult to calculate how many years it will take before there are 10,000 people per square meter of the Earth's surface. From this forecast it follows that the patterns of population growth must change.
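A back-of-the-envelope version of this "reduction to absurdity" calculation, with assumed round numbers for the Earth's surface area and the current population, might look as follows.

```python
import math

surface_area_m2 = 5.1e14       # approximate total surface area of the Earth, m^2 (assumption)
target_density = 10_000        # people per square meter (the absurd target)
current_population = 8e9       # assumed present population
doubling_period_years = 50     # growth assumption from the text

target_population = surface_area_m2 * target_density
doublings = math.log2(target_population / current_population)
print(round(doublings * doubling_period_years))   # roughly 1,500 years
```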

Taking into account undesirable trends identified during forecasting makes it possible to take the necessary measures to prevent them, and thereby prevent the implementation of the forecast.

There are also self-fulfilling forecasts. For example, if an evening television broadcast predicts the imminent bankruptcy of a certain bank, then the next morning many of the bank's depositors will want to withdraw their money, a crowd will gather at the entrance, and banking operations will have to be suspended. Journalists describe this situation with the words "the bank has burst". Usually it is enough that at one "fine" moment (for the bank) depositors wish to withdraw a significant share (say, 30%) of the funds from their deposit accounts.

Forecasting is a particular kind of modeling, which is the basis of cognition and management.

The role of forecasting in managing a country, an industry, a region or an enterprise is obvious. One must take into account STEP factors (social, technological, economic, political), the factors of the competitive environment and of scientific and technological progress, and also forecast the costs and income of enterprises and of society as a whole (in accordance with the product life cycle, over time and across the 11 stages of the international standard ISO 9004). The problems of implementing and practically using mathematical methods of econometric forecasting are associated primarily with the lack of sufficiently extensive experience of such studies in our country, since for decades planning was given priority over forecasting.

Statistical forecasting methods. The simplest methods of restoring dependencies for forecasting purposes start from a given time series, that is, a function defined at a finite number of points on the time axis. The time series is often considered within a probabilistic model, and factors (independent variables) other than time may be introduced, for example the volume of the money supply (the M2 aggregate). A time series can be multidimensional, that is, the number of responses (dependent variables) can be greater than one. The main tasks to be solved are interpolation and extrapolation. The least squares method in its simplest case (a linear function of one factor) was developed by Gauss more than two centuries ago, in 1794-1795. Preliminary transformations of the variables may be useful.
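A minimal sketch of this simplest dependence-restoration setup: least squares on a time trend plus one extra factor (here a hypothetical money-supply series). All numbers are invented for illustration.

```python
import numpy as np

t = np.arange(1, 9, dtype=float)                                  # time index
m2 = np.array([10.0, 10.4, 10.9, 11.2, 11.8, 12.1, 12.7, 13.0])   # extra factor
y = np.array([50.1, 51.0, 52.2, 52.9, 54.3, 54.8, 56.0, 56.6])    # response

X = np.column_stack([np.ones_like(t), t, m2])    # design matrix: intercept, time, factor
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least squares estimates

# Extrapolate one step ahead, assuming the factor's next value is known
x_next = np.array([1.0, 9.0, 13.4])
print(np.round(beta, 3), round(float(x_next @ beta), 2))
```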

Experience in forecasting the inflation index and the cost of the consumer basket has been accumulated at the Institute of High Statistical Technologies and Econometrics. There it proved useful to transform (take the logarithm of) the variable, the current inflation index. Characteristically, under stable conditions the forecasting accuracy turned out to be quite satisfactory, 10-15%. However, the significant rise in the price level predicted for the autumn of 1996 did not materialize: the country's leadership had switched to a strategy of curbing the growth of consumer prices through massive non-payment of wages and pensions. The conditions changed, and the statistical forecast became unusable. The influence of decisions of the Moscow leadership also showed in the fact that in November 1995 (before the parliamentary elections) prices in Moscow fell by an average of 9.5%, although November is usually characterized by faster price growth than other months of the year, except December and January.

The most commonly used method is least squares with several factors. The method of least absolute deviations and other extrapolation methods are used less often, although their statistical properties are frequently better; tradition and the generally low level of knowledge about econometric forecasting methods play a large role here.

Assessing the accuracy of the forecast is a necessary part of a qualified forecasting procedure. Probabilistic-statistical models of dependence restoration are usually used for this purpose, for example the best forecast is constructed by the maximum likelihood method. Parametric estimates of forecast accuracy and of its confidence limits (usually based on the normal error model) and nonparametric estimates (based on the central limit theorem of probability theory) have been developed. Thus, we have proposed and studied methods for confidence estimation of the point of intersection (meeting) of two time series and their application to assessing the dynamics of the technical level of one's own products and of competitors' products on the world market.
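As a sketch of a parametric confidence limit under the normal error model mentioned above, the following fits a simple linear trend and builds a standard 95% prediction interval for the next point. The series is hypothetical, and scipy is assumed to be available for the Student quantile.

```python
import numpy as np
from scipy import stats

y = np.array([3.1, 3.4, 3.2, 3.8, 4.0, 4.1, 4.5, 4.4])   # hypothetical series
t = np.arange(1, len(y) + 1, dtype=float)
n = len(y)

b, a = np.polyfit(t, y, 1)                 # slope, intercept of the trend
resid = y - (a + b * t)
s2 = resid @ resid / (n - 2)               # residual variance estimate
t0 = n + 1                                 # forecast horizon: next point
se = np.sqrt(s2 * (1 + 1 / n + (t0 - t.mean()) ** 2 / ((t - t.mean()) ** 2).sum()))
q = stats.t.ppf(0.975, df=n - 2)           # 95% two-sided Student quantile

forecast = a + b * t0
print(round(forecast, 2), round(forecast - q * se, 2), round(forecast + q * se, 2))
```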

Heuristic techniques that are not based on any theory are also used: the moving average method, the exponential smoothing method.

Adaptive forecasting methods allow you to quickly adjust forecasts when new points appear. We are talking about adaptive methods for estimating model parameters and adaptive methods for nonparametric estimation. Note that with the development of computer computing power, the problem of reducing the volume of calculations loses its significance.

Multivariate regression, including the use of nonparametric estimates of distribution density, is currently the main econometric forecasting tool. We emphasize that it is not necessary to use an unrealistic assumption about the normality of measurement errors and deviations from the regression line (surface). However, to abandon the assumption of normality, it is necessary to rely on a different mathematical apparatus, based on the multidimensional central limit theorem of probability theory and econometric linearization technology. It allows you to carry out point and interval estimation of parameters, check the significance of their difference from 0 in a nonparametric formulation, and build confidence limits for the forecast.

The problem of checking the adequacy of the model, as well as the problem of selecting factors, is very important. The fact is that the a priori list of factors influencing the response is usually very extensive; it is desirable to reduce it, and a large area of ​​modern econometric research is devoted to methods for selecting an “informative set of attributes.” However, this problem has not yet been completely resolved. Unusual effects appear. Thus, it has been established that commonly used estimates of the degree of a polynomial have a geometric distribution. Nonparametric methods for estimating probability density and their application to reconstruct a regression dependence of an arbitrary type are promising. The most general formulations in this area are obtained using non-numerical data statistics approaches.

Modern statistical forecasting methods also include autoregressive models, Box-Jenkins models, and systems of econometric equations based on both parametric and nonparametric approaches.
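As a minimal representative of the autoregressive family just mentioned, the sketch below fits an AR(1) model y(t) = c + phi*y(t-1) by ordinary least squares; the data are invented.

```python
import numpy as np

y = np.array([1.2, 1.5, 1.4, 1.8, 1.7, 2.0, 2.1, 2.3, 2.2, 2.5])
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])   # regress y(t) on y(t-1)
c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]

one_step_forecast = c + phi * y[-1]                  # forecast for the next point
print(round(c, 3), round(phi, 3), round(one_step_forecast, 3))
```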

To establish whether asymptotic results can be used for finite (so-called "small") sample sizes, computer statistical technologies are useful. They also make it possible to build various simulation models. Let us note the usefulness of data reproduction (bootstrap) methods. Computer-intensive forecasting systems combine various forecasting methods within a single automated forecaster's workstation.
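A sketch of the data-reproduction (bootstrap) idea, reusing the hypothetical series from the earlier prediction-interval example: the residuals of a linear trend are resampled many times, the trend is refit, and the spread of the resulting one-step forecasts gives an empirical interval. The resampling scheme and numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([3.1, 3.4, 3.2, 3.8, 4.0, 4.1, 4.5, 4.4])
t = np.arange(1, len(y) + 1, dtype=float)

b, a = np.polyfit(t, y, 1)
fitted = a + b * t
resid = y - fitted

forecasts = []
for _ in range(1000):
    # resample residuals with replacement and refit the trend
    y_star = fitted + rng.choice(resid, size=len(resid), replace=True)
    b_s, a_s = np.polyfit(t, y_star, 1)
    forecasts.append(a_s + b_s * (len(y) + 1))

print(np.round(np.percentile(forecasts, [2.5, 50, 97.5]), 2))
```

Comparing this empirical interval with the parametric one above illustrates the point made in the text about checking asymptotic results on small samples.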

Forecasting based on data of a non-numerical nature, in particular the forecasting of qualitative characteristics, relies on the results of the statistics of non-numerical data. Very promising for forecasting are regression analysis based on interval data, including the determination and calculation of the notna (the maximum possible error induced by interval uncertainty) and the rational sample size, and regression analysis of fuzzy data. The general formulation of regression analysis within the statistics of non-numerical data and its special cases, analysis of variance and discriminant analysis (supervised pattern recognition), gives a unified approach to formally different methods and is useful in the software implementation of modern statistical forecasting methods.

Expert forecasting methods. Expert forecasting methods are needed, and are generally understood to be needed, when making decisions at all levels of management: the country, an industry, a region, an enterprise. Let us note the great practical importance of expert appraisal in comparing and selecting investment and innovation projects, in project management, and in environmental assessment. The roles of decision makers and specialists (experts) in decision-making procedures, the decision criteria and the place of expert assessments in decision-making procedures were discussed above. As examples of specific expert procedures widely used in forecasting we point to the Delphi method and the scenario method. On their basis specific procedures for preparing and making decisions are formed using expert assessment methods, for example procedures for distributing funding for research work (based on scoring or paired comparisons), technical and economic analysis, desk marketing research (as opposed to "field" sample surveys), and the evaluation, comparison and selection of investment projects.

With respect to forecasting tasks, let us recall some aspects of planning and organizing an expert study. A working group and an expert commission must be formed. Very important stages are the formulation of the goals of the expert study (collecting information for the decision maker and/or preparing a draft decision for the decision maker, etc.) and the formation of the expert commission (methods of lists (registers), "snowball", self-assessment, mutual assessment), with a preliminary solution of the problem of the experts' a priori preferences. Different ways of organizing an expert study, differing in the number of rounds (one, several, or not fixed), the order in which experts are involved (simultaneously or sequentially), the way opinions are taken into account (with or without scales), and the organization of communication among experts (no communication, correspondence, face-to-face with restrictions ("brainstorming") or without restrictions), make it possible to allow for the specifics of a particular expert study. Computer support for the work of the experts and the working group, and the economics of conducting expert studies, are important for success.

Expert assessments can be obtained in various mathematical forms. The most commonly used are quantitative or qualitative (ordinal, nominal) attributes, binary relations (rankings, partitions, tolerances), intervals, fuzzy sets, results of paired comparisons, texts, and others. The basic concepts of (representational) measurement theory, the main types of scales, admissible transformations, adequate conclusions, and so on, are important in connection with expert assessment. It is necessary to use the mean values that correspond to the main measurement scales. Applied to various kinds of ratings, the representational theory of measurement makes it possible to determine the degree of their adequacy to the forecasting situation and to suggest the most useful ones for forecasting purposes.

For example, an analysis of ratings of politicians by the degree of their influence, published by one of the well-known central newspapers, showed that, because of the inadequacy of the mathematical apparatus used, only the first ten places may bear some relation to reality (they do not change when another method of data analysis is used, i.e. they do not depend on the subjectivity of the members of the working group); the rest are "information noise", and attempts to rely on them in predictive analysis can only lead to errors. The initial part of this newspaper's rating can also be questioned, but for deeper reasons, for example those related to the composition of the expert commission.

The main procedures for processing predictive expert assessments are consistency checking, cluster analysis and finding a group opinion.

Checking the consistency of expert opinions expressed as rankings is carried out using the Kendall and Spearman rank correlation coefficients and the Kendall and Babington Smith coefficient of rank concordance. Parametric models of paired comparisons (Thurstone, Bradley-Terry-Luce) and nonparametric models of the theory of Lucians are used.
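As a sketch of these consistency checks, the following computes pairwise Spearman and Kendall rank correlations (via scipy) and the Kendall-Babington Smith concordance coefficient W directly from its definition, for three hypothetical experts ranking five alternatives.

```python
import numpy as np
from scipy import stats

# rankings[i, j] = rank that expert i assigns to alternative j (1 = best); invented data
rankings = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])
m, n = rankings.shape

rho, _ = stats.spearmanr(rankings[0], rankings[1])   # pairwise Spearman correlation
tau, _ = stats.kendalltau(rankings[0], rankings[1])  # pairwise Kendall correlation
print(round(rho, 3), round(tau, 3))

rank_sums = rankings.sum(axis=0)
S = ((rank_sums - m * (n + 1) / 2) ** 2).sum()
W = 12 * S / (m ** 2 * (n ** 3 - n))     # concordance: 1 = perfect agreement, 0 = none
print(round(W, 3))
```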

In the absence of consistency, expert opinions are divided into groups that are similar to each other using the nearest neighbor method or other methods of cluster analysis (automatic construction of classifications, unsupervised pattern recognition). The classification of Lucians is carried out on the basis of a probabilistic-statistical model.

Various methods are used to construct the final opinion of the expert commission. The method of average ranks stands out for its simplicity. Computer modeling has made it possible to establish a number of properties of the Kemeny median, which is often recommended as the final (generalized, average) opinion of the commission of experts. The interpretation of the law of large numbers for non-numerical data in terms of the theory of expert surveys is as follows: the final opinion is stable, i.e. it changes little when the composition of the expert commission changes, and as the number of experts grows it approaches the "truth". In the adopted approach it is assumed that the experts' answers can be regarded as measurements with errors, that they are independent and identically distributed random elements, that the probability of taking a particular value decreases with the distance from some center (the "truth"), and that the total number of experts is sufficiently large.
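A sketch of the simplest aggregation rule mentioned above, the method of average ranks, reusing the hypothetical rankings from the previous example: alternatives are ordered by the mean of the ranks assigned by the experts.

```python
import numpy as np

rankings = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])
mean_ranks = rankings.mean(axis=0)     # average rank of each alternative
order = np.argsort(mean_ranks)         # final (group) ordering, best first
print(mean_ranks, order + 1)           # +1 to label alternatives 1..5
```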

Problems of applying forecasting methods under risk. There are numerous examples of situations associated with social, technological, economic, political, environmental and other risks, and it is precisely in such situations that forecasting is usually necessary. Various types of criteria are used in the theory of decision making under uncertainty (risk). Because the decisions obtained by different criteria may contradict one another, the need to use expert assessments is obvious.

In specific forecasting tasks it is necessary to classify risks, pose the problem of assessing a specific risk, and structure the risk, in particular by building cause trees (in other terminology, fault trees) and consequence trees (event trees). The central task is the construction of group and generalized indicators, for example indicators of competitiveness and quality. Risks must be taken into account when forecasting the economic consequences of decisions, the behavior of consumers and of the competitive environment, the foreign economic situation and the macroeconomic development of Russia, the ecological state of the environment, the safety of technologies, and the environmental hazards of industrial and other facilities. The scenario method is indispensable for analyzing the technical, economic and social consequences of accidents.

There is some specificity in the application of forecasting methods in situations involving risk. The role of the loss function and methods for its evaluation, including in economic terms, is great. In specific areas, probabilistic safety analysis (for nuclear energy) and other special methods are used.

Modern computer forecasting technologies. Promising are interactive forecasting methods that use econometric databases, simulation (including simulation based on the Monte Carlo method, i.e. the method of statistical trials) and economic-mathematical dynamic models, combining expert, statistical and modeling blocks. Let us note both the similarities and the differences between expert assessment methods and expert systems. One can say that an expert system models the behavior of an expert by formalizing his knowledge with a special technology. But the intuition of a "live" expert cannot be put into a computer, and when the expert's opinions are formalized (in effect, when he is questioned), some of his ideas are refined while others are coarsened. In other words, when expert assessments are used, one turns directly to the experience and intuition of highly qualified specialists, whereas with expert systems one deals with computer algorithms for calculation and inference, in whose creation experts were once involved as a source of data and of standard conclusions.

Let us draw attention to the possibility of using in forecasting production functions, which statistically describe the relationship between output and the factors of production, and various ways of taking scientific and technological progress into account, in particular trend analysis and the expert identification of growth points. Examples of economic forecasts of all types are available in the literature. By now, computer systems and software for combined forecasting methods have been developed.


1.2 Basic ideas of technology for scenario expert forecasts

As already noted, socio-economic forecasting, like any forecasting, can be successful only under some stability of conditions. However, decisions of the authorities, of individuals, and other events change the conditions, and events develop differently than previously expected. Objectively there are choice points (bifurcation points), after which the development considered by the forecasters can follow one of several possible paths (these paths are usually called scenarios). The choice can be made at different levels: by a specific person (move to another job or stay), by a manager (produce one brand of product or another), by competitors (cooperation or struggle), by government structures (the choice of a taxation system), by the population of the country (the choice of a president), or by the "international community" (whether or not to impose sanctions against Russia).

Let us look at an example. It is quite obvious that after the first round of the 1996 presidential elections, the further development of socio-economic events could be discussed only in terms of scenarios: if B.N. Yeltsin wins, such and such will happen; if G.A. Zyuganov wins, events will take a different course.

For example, one study aimed to forecast the dynamics of gross domestic product (GDP) over 9 years (1999-2007). When it was carried out, it was clear that during this time various political events would take place, in particular at least two cycles of parliamentary and presidential elections (assuming the existing political structure was preserved), whose results could not be predicted unambiguously. Therefore the forecast of GDP dynamics could only be made separately for each scenario from a certain set covering the possible paths of Russia's socio-economic dynamics.

The scenario method is necessary not only in the socio-economic field. For example, when developing methodological, software and information support for risk analysis of chemical technology projects, it is necessary to compile a detailed catalog of accident scenarios associated with leaks of toxic chemicals. Each of these scenarios describes an accident of its own type, with its individual origin, development, technical, economic and social consequences, and prevention capabilities.

Thus, the scenario method is a method of decomposition (division into parts) of the forecasting problem: it involves identifying a set of individual variants of the development of events (scenarios) that together cover all possible variants of development. Each individual scenario must allow sufficiently accurate forecasting, and the total number of scenarios must be surveyable.

The possibility of such a decomposition is not obvious. When applying the scenario method, it is necessary to carry out two stages of research:

Construction of a comprehensive but manageable set of scenarios;

Forecasting within each specific scenario in order to obtain answers to questions of interest to the researcher.

Each of these stages is only partially formalizable. A significant part of the reasoning is carried out at the qualitative level, as is customary in the socio-economic sciences and the humanities. One of the reasons is that striving for excessive formalization and mathematization leads to the artificial introduction of certainty where it essentially does not exist, or to the use of a cumbersome mathematical apparatus. Thus, reasoning at the verbal level is considered conclusive in most decision-making situations, whereas an attempt to refine the meaning of the words used, for example with the theory of fuzzy sets, leads to very cumbersome mathematical models and calculations.

To construct an exhaustive but surveyable set of scenarios, it is necessary first to analyze the dynamics of the socio-economic development of the economic agent in question and of its environment. The roots of the future lie in the present and the past, often in the very distant past. Besides the macroeconomic and microeconomic characteristics, which are known only with errors that cannot be considered random or small, one must take into account the state and dynamics of mass consciousness in the country and the political realities, including foreign policy, since over the time interval usually considered (up to 10 years) the economy often follows politics rather than the other way around.

For example, by the beginning of 1985 the economy of the USSR was in a fairly stable state, with annual growth averaging 3-5%. If the leadership of the country had been in other hands, development would have continued under the same conditions, and by the end of the millennium the GDP of the USSR would have grown by 50%, amounting to roughly 150% of the 1985 level. In reality, for political reasons, Russia's GDP over those 15 years fell by about half, i.e. to about 50% of the 1985 level, or three times less than could have been expected on purely economic grounds had the stable conditions of 1985 been preserved.

The set of scenarios must be surveyable, so various unlikely events have to be excluded: the arrival of aliens, the fall of an asteroid, mass epidemics of previously unknown diseases, and so on.

The creation of a set of scenarios in itself is the subject of expert research conducted in accordance with the methodology described above. In addition, experts can assess the likelihood of a particular scenario occurring. It is clear that these estimates are not reliable.

A simplified approach to forecasting by the scenario method is often used: three scenarios are formulated, optimistic, probable and pessimistic, and for each of them the values of the parameters describing the production and economic situation (the "case") are chosen rather arbitrarily. The purpose of this approach is to calculate the intervals of scatter of the characteristics and the "corridors" of the time series of interest to the researcher (and to the customer of the study); for example, the cash flow and the net present value (NPV) of an investment project are forecast in this way.

It is clear that such a simplified approach cannot give the maximum or minimum values of the characteristics; it only gives an idea of the order of magnitude of their dispersion. However, its development leads to a Bayesian formulation in decision theory. For example, if a scenario is described by an element of a finite-dimensional Euclidean space, then any probability distribution on the set of initial parameters is transformed into a distribution of the characteristics of interest to the researcher. The calculations can be carried out with modern information technology using the method of statistical testing (Monte Carlo): in accordance with the given distribution on the parameter set, a specific parameter vector is drawn with a pseudorandom number generator and the final characteristics are calculated for it. The result is an empirical distribution on the set of final characteristics, which can be analyzed in various ways, yielding estimates of the mathematical expectation, the scatter, and so on. What remains unclear is how to define the distribution on the parameter set; naturally, experts can be used for this.
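As a minimal sketch of this statistical-testing step (assuming, purely for illustration, that the final characteristic is the NPV of a hypothetical project driven by two scenario parameters with expert-assigned distributions), the calculation might look as follows:

```python
# A minimal Monte Carlo sketch of the statistical-testing step described above.
# Assumed, purely for illustration: the final characteristic is the NPV of a
# hypothetical project, driven by two scenario parameters (annual growth g and
# discount rate r) with expert-assigned distributions.
import numpy as np

rng = np.random.default_rng(42)              # pseudorandom number generator
n_scenarios = 10_000
years = np.arange(1, 11)                     # 10-year horizon, as in the text

# Expert-specified distributions on the scenario parameters (illustrative)
growth = rng.normal(loc=0.03, scale=0.02, size=n_scenarios)
discount = rng.uniform(low=0.05, high=0.15, size=n_scenarios)

base_cash_flow = 100.0
investment = 800.0
# For each sampled parameter vector, compute the final characteristic (NPV)
cash_flows = base_cash_flow * (1.0 + growth[:, None]) ** years          # (n, 10)
npv = (cash_flows / (1.0 + discount[:, None]) ** years).sum(axis=1) - investment

# Empirical distribution of the final characteristic: mean, scatter, "corridor"
print(f"mean NPV: {npv.mean():.1f}")
print(f"std  NPV: {npv.std():.1f}")
print(f"5%-95% corridor: [{np.quantile(npv, 0.05):.1f}, {np.quantile(npv, 0.95):.1f}]")
```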

Forecasting within each specific scenario in order to obtain answers to questions of interest to the researcher is also carried out in accordance with the forecasting methodology described above. Under stable conditions, statistical methods for time series forecasting can be applied. However, this is usually preceded by analysis with the help of experts, and often forecasting at the verbal level is sufficient (to obtain conclusions of interest to the researcher and decision maker) and does not require quantitative clarification.

The question of how forecasting results are used belongs not to econometrics but to a related science - decision theory. As is known, when making decisions based on an analysis of the situation, including the results of forecasting studies, one can proceed from various criteria: assume that the situation will turn out in the worst, the best, or some average way, or try to outline measures that guarantee a minimum acceptable useful result under any scenario, and so on.

So, the concept of a modern methodology for expert assessment using the scenario method is considered. It was used, for example, to predict the socio-economic development of Russia.

2. Application of information technologies in economic and mathematical forecasting

Before the advent of modern IT, there were no broad opportunities to use effective economic and mathematical models directly in the process of economic activity. In addition, the use of existing forecasting models for analytical purposes did not place such high demands on their information support.

Fundamentals of Forecasting Technologies

When building a predictive system from scratch, it is necessary to resolve a number of organizational and methodological issues. The first include:

Training users in methods of analysis and interpretation of forecast results;

Determining the directions of movement of forecast information within the enterprise, at the level of its divisions and individual employees, as well as the structure of communications with business partners and authorities;

Determining the timing and frequency of forecasting procedures;

Development of principles for linking the forecast with long-term planning and the procedure for selecting options for the results obtained when drawing up an enterprise development plan.

The methodological problems of constructing a forecasting subsystem are:

Development of the internal structure and mechanism of its functioning;

Organization of information support;

Development of mathematical software.

The first problem is the most difficult, since solving it requires building a set of forecasting models whose scope is a system of interrelated indicators. The problem of systematizing and evaluating forecasting methods is central here, since choosing a specific method requires a comparative analysis of the candidates. A classification of forecasting methods that takes into account the knowledge system underlying each group can be presented in aggregate as follows: methods of expert assessment; logical modeling methods; mathematical methods.

Each group is suitable for solving a certain range of problems. Therefore, practice puts forward the following requirements for the methods used: they must be focused on a specific forecasting object, must be based on a quantitative measure of adequacy, and be differentiated by the accuracy of estimates and the forecast horizon.

The main tasks that arise in the process of creating a predictive system are divided into:

Construction of a system of predicted processes and indicators;

Development of an apparatus for economic and mathematical analysis of predicted processes and indicators;

Specifying the method of expert assessments, identifying indicators for examination and obtaining expert assessments of some predicted processes and indicators;

Forecasting indicators and processes indicating confidence intervals and accuracies;

Development of methods for interpreting and analyzing the results obtained.

Work on information and mathematical support for the forecasting system deserves special attention. The process of creating software can be represented in the following stages:

Development of a methodology for structural identification of a forecast object;

Development of methods for parametric identification of a forecast object;

Development of methods for forecasting trends;

Development of methods for predicting the harmonic components of processes;

Development of methods for assessing the characteristics of random components of processes;

Creation of complex models for predicting indicators that form an interconnected system.

The creation of a forecasting system requires an integrated approach to solving the problem of its information support, which is usually understood as a set of initial data used to obtain forecasts, as well as methods, methods and means that ensure the collection, accumulation, storage, retrieval and transmission of data during the operation of the forecasting system and its interaction with other enterprise management systems.

System information support usually includes:

Information fund (database);

Sources of formation of the information fund, flows and methods of data receipt;

Methods of accumulation, storage, updating and retrieval of data forming an information fund;

Methods, principles and rules for data circulation in the system;

Methods for ensuring the reliability of data at all stages of their collection and processing;

Methods of information analysis and synthesis;

Methods for an unambiguous formalized description of economic data.

Thus, the following main components are required to implement the forecasting process:

Sources of internal information, which is based on management and accounting systems;

Sources of external information;

Specialized software that implements forecasting algorithms and analyzes results.

Considering the importance of solving the forecasting problem for market entities, it is advisable to check the quality of the proposed methods and algorithms, as well as technologies in general, using specially selected (test) source data. A similar verification method has been used for quite a long time when assessing the adequacy of mathematical tools designed for nonlinear optimization, for example, using the Rosenbrock and Powell functions.

Confirmation (or verification) of the quality and performance of a forecasting technology is usually carried out by comparing a priori known model data with their predicted values and assessing the statistical characteristics of forecast accuracy. Let us consider this technique in a situation where the process model is an additive combination of a trend component T_t, a seasonal (harmonic) component and a random component.
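One possible sketch of such a verification, under the assumption that the test series is generated synthetically with a known additive structure (linear trend, weekly harmonic, Gaussian noise - all settings illustrative) and the last two weeks are held out for comparison:

```python
# A sketch of the verification setup: a synthetic series with a known additive
# structure (trend T_t + weekly harmonic S_t + random component e_t) is generated,
# the last 14 days are held out, and the forecast is compared with the a priori
# known continuation. All numerical settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(730)                               # two years of daily data
series = (50 + 0.05 * t                          # trend T_t
          + 10 * np.sin(2 * np.pi * t / 7)       # seasonal (harmonic) S_t
          + rng.normal(0, 3, t.size))            # random component e_t

horizon = 14
train, test = series[:-horizon], series[-horizon:]
t_train, t_test = t[:-horizon], t[-horizon:]

def design(tt):
    """Trend + weekly harmonic regressors."""
    return np.column_stack([np.ones_like(tt, dtype=float), tt,
                            np.sin(2 * np.pi * tt / 7),
                            np.cos(2 * np.pi * tt / 7)])

coef, *_ = np.linalg.lstsq(design(t_train), train, rcond=None)
forecast = design(t_test) @ coef

mae = np.abs(forecast - test).mean()             # forecast accuracy on known data
print(f"mean absolute error over the {horizon}-day horizon: {mae:.2f}")
```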

Currently, a wide variety of software tools provide, to one degree or another, the collection and analytical processing of information. Some of them, for example MS Excel, are equipped with built-in statistical functions and programming tools. Others, especially inexpensive accounting and management accounting programs, either lack such capabilities or implement them insufficiently, and sometimes incorrectly. Unfortunately, the same is true of some more powerful and multifunctional enterprise management systems. This situation is apparently explained by a shallow analysis on the developers' part of the properties of the forecasting algorithms they have chosen, and by their uncritical application. For example, judging by the available sources, zero-order exponential smoothing is often used as the basis for predictive algorithms. However, this approach is valid only in the absence of a trend in the process being studied. In reality, economic processes are non-stationary, and forecasting requires models more complex than a zero-order (constant-level) model.
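The remark about zero-order exponential smoothing is easy to illustrate: on a deterministically trending series, a constant-level (zero-order) smoother lags behind, while Holt's two-parameter method, which models the trend explicitly, does not. The sketch below uses arbitrary smoothing parameters:

```python
# Zero-order (simple) exponential smoothing vs. Holt's method on a trending series.
# Parameter values are arbitrary; the point is the systematic lag of the flat model.
import numpy as np

def simple_exponential_smoothing(y, alpha=0.3):
    level = y[0]
    for value in y[1:]:
        level = alpha * value + (1 - alpha) * level
    return level                                  # flat forecast for any horizon

def holt(y, alpha=0.3, beta=0.1):
    level, trend = y[0], y[1] - y[0]
    for value in y[1:]:
        prev_level = level
        level = alpha * value + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return lambda h: level + h * trend            # forecast h steps ahead

y = 100 + 2.0 * np.arange(100)                    # deterministic upward trend
print("actual value one step ahead:", 100 + 2.0 * 100)
print("simple smoothing forecast:  ", round(simple_exponential_smoothing(y), 1))
print("Holt forecast:              ", round(holt(y)(1), 1))
```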

From the perspective of the topic under consideration, it is interesting to trace the development path of domestic automated banking systems. Early banking systems were based on rigid technology and constantly required modifications or additional software. This prompted financial software developers, following the principles of openness, scalability and flexibility, to adopt industrial DBMSs. However, these DBMSs themselves turned out to be unsuitable for high-level analytical problems, which include the problem of forecasting. To solve it, additional technologies for data warehousing and online analytical processing had to be used, which enabled decision support systems for financial institutions and the preparation of forecasts. The same approach is used in complex enterprise management systems.

Another direction of modern applied use of IT-based forecasting methods is the solution of a wide range of marketing problems. An illustration is the SAS Churn Management Solution for Telecommunications software. It is intended for telecommunications operators and allows, as its developers claim, to build predictive models and, with their help, assess the likelihood of churn of certain categories of customers. The basis of this software is the distributed database server Scalable Performance Data Server, tools for building and administering warehouses and data marts, data mining tools Enterprise Miner, decision support system SAS/MDDB Server, as well as auxiliary tools.

To ensure the competitiveness of modern CRM systems, their extended capabilities, as with automated banking systems, include reporting functions that use OLAP technologies and allow, to a certain extent, forecasting the results of marketing, sales and customer service.

There are quite a few specialized software products that provide statistical processing of numerical data, including individual elements of forecasting; such products include SPSS, Statistica and others. These tools have both advantages and disadvantages that significantly limit the scope of their practical application. It should be noted that assessing the suitability of specialized mathematical and statistical software for solving forecasting problems by ordinary users without special training requires separate serious research and discussion.

However, solving forecasting problems for small and medium-sized businesses with powerful and expensive information systems and technologies is practically impossible, primarily for financial reasons. Therefore, a very promising direction is developing the analytical capabilities of existing, widespread, low-cost accounting and management accounting systems. Additional reports developed for specific business processes and containing the analytical information a particular user needs have a high efficiency-to-cost ratio.

Some software developers create entire lines of analytical tools. For example, the Parus Corporation offers the Parus-Analytics and Triumph-Analytics solutions for a wide range of users from small and medium-sized businesses. More complex tasks of analytical processing of forecast information are integrated into the Parus system in the form of a so-called situational center. According to Dmitry Sudarev, manager for the development of replicable solutions, a decision was made to develop and implement software products that allow moving from simply recording the facts of an enterprise's activities to analyzing information; at the same time, a transition was planned from automating the work of accountants and middle managers to processing information for top management. Taking into account the possible range of consumers, Parus-Analytics and Triumph-Analytics place no special requirements on the software and hardware environment; however, Triumph-Analytics is implemented on MS SQL Server, which gives it greater capabilities for predicting the processes under study - in particular, the harmonic component of the forecasts is taken into account.

The value of a forecast increases many times over when it is directly used in enterprise management. Therefore, an important area is the integration of forecasting systems with systems such as Kasatka, MS Project Expert, etc. For example, the Kasatka software from SBI is positioned as an automated workstation for the manager and specialists of the marketing department and is intended for the development of management, marketing and strategic planning. This purpose predetermines the need to identify long-term trends and take them into account when planning. The forecasting horizon is determined based on the relevant goals of the organization.

Conclusion

Thus, to date, quite a lot of research has been carried out and impressive practical solutions to the problem of forecasting in science, technology, economics, demography and other fields have been obtained. Attention to this problem is due, among other things, to the scale of the modern economy, the needs of production, the dynamics of social development, the need to improve planning at all levels of management, as well as accumulated experience. Forecasting is one of the decisive elements of the effective organization of management of individual economic entities and economic communities due to the fact that the quality of decisions made is largely determined by the quality of forecasting their consequences. Therefore, decisions made today must be based on reliable assessments of the possible development of the phenomena and events being studied in the future.

Many experts see improvement in forecasting in the development of appropriate information technologies. The need for their use is due to a number of reasons, including: growth in the volume of information; complexity of algorithms for calculating and interpreting results; high requirements for the quality of forecasts; the need to use forecasting results to solve planning and management problems.

From time to time, information appears about positive results achieved by one company or another. A number of publications note that successful assessment of trends in the market situation, in demand for goods or services, and in other economic processes and characteristics makes it possible to obtain a significant increase in profit and to improve other economic indicators. The mechanism of success is at first glance simple and clear: by anticipating what will happen in the future, one can take timely and effective measures, exploiting positive trends and compensating for negative processes and phenomena.

Accuracy, reliability and efficiency, like the other components of forecast quality, are ensured by a number of factors, among which it is necessary to highlight: software based on economic and mathematical models adequate to reality; completeness of coverage and reliability of the sources of initial information on which the forecasting algorithms operate; efficiency of processing internal and external information; the ability to critically analyze forecast estimates; and timeliness of making the necessary changes to the methodological and information support of forecasting.

Special software is based on carefully selected models, methods and techniques. Their implementation is extremely important for obtaining high-quality forecasts when solving problems of current and strategic planning. An analysis of the current situation shows that the difficulties in introducing IT, which provides forecasting of economic processes, are not only technical or methodological, but also organizational and psychological in nature. Consumers of the results sometimes do not understand the principles of the models used, their formalization and objectively existing limitations. This, as a rule, gives rise to distrust in the results obtained. Another group of implementation problems is associated with the fact that predictive models are often closed, autonomous in nature and therefore their generalization for the purpose of development and mutual adaptation is difficult. Therefore, a step-by-step approach highlighting the main analytical tasks may be a compromise solution.

However, there are practically no ready-made replicable or corporate solutions that provide forecasting for small and medium-sized economic entities at the system level with high quality and affordable prices. Currently, automated enterprise management systems are limited mainly to elementary accounting and control tasks.


Mathematician Konstantin Vorontsov on applying machine learning to business problems, compositions of adaptive models, and improving data quality

Ten years ago, one large retail chain announced a tender for solving the problem of forecasting sales volumes across its network. Almost all large retailers solve forecasting problems, because they need them to plan purchases. The competition conditions were as follows: we were given two years of data - daily sales of approximately 12,000 products in one of the chain's stores. The tender was closed; besides us, six more companies were invited, among them very large vendors of analytical solutions for retail. We, of course, assessed our chances of winning this tender as small.

The condition was to produce a sales forecast for the two weeks immediately following the two years for which data were available. The organizers of the competition proposed their own quality functional, which was used to measure forecast accuracy. This functional was a little non-standard. The organizers wanted to account for the fact that it aggregates a large number of different goods - it is not good to add up pieces and kilograms - so the functional was a sum over all goods in which each term was divided by the predicted value itself. This was not a very transparent move; it is not usually done. We warned the organizers that the functional was a little strange, as did other participants, but this decision had its own logic, and the competition took place under these conditions.

Typically, the forecast of consumer demand - more precisely, of sales volumes - is made using forecasting methods that have been known in statistics for a very long time. In general, they are based on the least squares method, where the functional contains a sum over products, a sum over time points, and the square of the difference between the algorithm's forecast and the actual sales volume of that product on that day. This is how the functional is usually arranged, and in all standard solutions the forecasting algorithm is tuned by minimizing such a functional.
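In conventional notation, the standard functional just described can be written as below, where y_it is the actual sales volume of product i on day t, f_it(θ) is the model forecast and θ are the model parameters; the competition functional differed in that each term was additionally divided by the forecast itself.

```latex
% Standard least-squares quality functional for multi-product sales forecasting
% (notation assumed for illustration): y_{it} - actual sales of product i on day t,
% f_{it}(\theta) - forecast of the model with parameters \theta.
\[
Q(\theta) \;=\; \sum_{i=1}^{n} \sum_{t=1}^{T} \bigl( f_{it}(\theta) - y_{it} \bigr)^{2}
\;\longrightarrow\; \min_{\theta}
\]
```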

There are many simple, fast-working methods, also known for a long time, since the 1960s, that we began to use in order to solve the forecasting problem. These are the exponential moving average methods, the Brown, Theil-Wage, Holt-Winters models, and so on. Some of them take into account seasonality. Seasonality should not be understood as winter - summer, but rather as weekdays - weekends, that is, weekly seasonality. Many items actually sell differently on weekdays and weekends. We immediately realized that our major competitors in this tender would use standard approaches: they would use the least squares method, because they have ready-made solutions, and rather labor-intensive computational methods like neural networks or autoregression. And we decided to go the other way and use simple methods with the understanding that each product has many of its own characteristics. There are many models, but it is not known which model will be the best for each product. Moreover, we even assumed that a product switches its model from time to time and may be better predicted by one model at first, and then at some point another model will start to work better. Therefore, we made an adaptive composition of simple adaptive models. At each moment in time, we select the model that has recently worked better, given more accurate forecasts, switch to it, and it is this model that gives the forecasts. The first decision that was made was to use a composition of simple models instead of building something more complex.
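A sketch of the adaptive-composition idea (not the authors' actual implementation): several simple smoothing models run in parallel, and at every step the product's forecast is taken from the model with the smallest error over a recent window. Here the pool is just simple exponential smoothing with different parameters, standing in for the Brown/Holt-Winters family mentioned above; the pool and window length are illustrative.

```python
# Adaptive composition of simple adaptive models: at each step, switch to the model
# that has recently given the most accurate forecasts for this product.
import numpy as np

def ses_forecasts(y, alpha):
    """One-step-ahead forecasts of simple exponential smoothing with parameter alpha."""
    preds = np.empty_like(y, dtype=float)
    level = y[0]
    for t, value in enumerate(y):
        preds[t] = level                      # forecast made before seeing y[t]
        level = alpha * value + (1 - alpha) * level
    return preds

def adaptive_forecast(y, alphas=(0.1, 0.3, 0.7), window=14):
    """At every step, use the model with the lowest mean error over the last `window` days."""
    all_preds = np.vstack([ses_forecasts(y, a) for a in alphas])    # (models, T)
    errors = np.abs(all_preds - y)                                  # (models, T)
    result = np.empty_like(y, dtype=float)
    for t in range(len(y)):
        lo = max(0, t - window)
        best = 0 if t == 0 else errors[:, lo:t].mean(axis=1).argmin()
        result[t] = all_preds[best, t]
    return result

rng = np.random.default_rng(1)
sales = np.maximum(0, 20 + 5 * np.sin(np.arange(200) / 10) + rng.normal(0, 2, 200))
composite = adaptive_forecast(sales)
print("MAE of adaptive composition:", round(np.abs(composite - sales)[14:].mean(), 2))
```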

The second decision was this: we realized that the functional was non-standard, and, as taught in the first year at Phystech, we took this functional, differentiated it with respect to the model parameters, set the derivatives to zero and obtained a system of equations from which we derived a new method. In principle, this is one evening's work for a mathematician, but we guessed that our competitors would not do it, because they have ready-made solutions and believe in them strongly. As it turned out, we had indeed made the right decision.

Another feature of this problem is that there were large intervals of non-random lack of demand. Imagine: a product is sold consistently every day, and suddenly you see that for two weeks this product is not available at all. This, of course, is not due to the fact that there is no demand, but to the fact that the goods simply were not delivered, they were not on the shelves, they were not in the warehouse. We simply cut out such intervals of no demand from the training data so that they did not affect the result.
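A sketch of that cleaning step, with an assumed threshold for how long a run of zero sales must be to count as a stockout rather than a genuine absence of demand:

```python
# Remove long runs of zero sales (assumed to be stockouts, not lack of demand)
# from the training series. The 7-day threshold is an illustrative choice.
import numpy as np

def drop_stockout_runs(sales, min_run=7):
    """Return the series with runs of >= min_run consecutive zeros removed."""
    sales = np.asarray(sales, dtype=float)
    is_zero = sales == 0
    keep = np.ones(sales.size, dtype=bool)
    run_start = None
    for i, z in enumerate(is_zero):
        if z and run_start is None:
            run_start = i
        if (not z or i == sales.size - 1) and run_start is not None:
            run_end = i + 1 if z else i
            if run_end - run_start >= min_run:
                keep[run_start:run_end] = False
            run_start = None
    return sales[keep]

series = np.array([5, 6, 4, 0, 0, 0, 0, 0, 0, 0, 0, 5, 7, 0, 6])
print(drop_stockout_runs(series))   # the 8-day zero run is dropped, the single zero kept
```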

The day came when we presented our solution to the organizers of the competition. We knew that one of our major competitors had presented before us, so when the organizers asked, "How many hours does your model take to compute?", we were surprised and said: "Didn't you notice that, right here on my laptop, in one minute and eight seconds we not only computed all the predictions but also trained our model on the two-year interval?" It was, of course, a shock. As a result, our model turned out to be not only the most accurate but also the fastest. We showed that all forecasts across the entire network could be computed in literally two hours, overnight, on an old server, and that no new equipment even needed to be purchased.

This is not only a success story but also a very instructive one. Firstly, one should not be afraid to use non-standard methods, and if a problem is posed in a non-standard way, then only a mathematician can quickly find a solution - it is good when it works out quickly; sometimes, of course, it does not. Secondly, this episode gave us the strength to enter the market with our own solutions - there is no need to be afraid of strong competitors in the market. There was another instructive moment. When I was selecting models for this task, we first introduced as many as thirty different models, and from them, adaptively, as I described, the optimal model was selected for each product every day.

In principle, this is fraught with the phenomenon of overfitting: we could fit the training data well and accurately yet predict poorly on new test data. I knew that this phenomenon arises when the model is overly complex. It seemed to me that choosing from thirty models is not such a complex procedure, so there should be no overfitting here. I was very surprised when I ran an experiment, compared the results on the training data with those on control data and realized that the overfitting was simply huge and we were losing tens of percent of accuracy to this effect. I had been planning to introduce more and more new models, but this experiment showed that the solution had, on the contrary, to be simplified, and that thirty models was too many. The next shock came when it turned out that the optimal number of models was six - it made no sense to build a solution more complex than one of six models.

Later this problem puzzled me theoretically, and a solution was found only when I was working on my doctoral dissertation and had seriously studied the phenomenon of overfitting within the combinatorial theory of overfitting. It turned out that if you choose among models and you have one good model while all the others are bad, then, as a rule, it is this good model that you will choose; there is no overfitting, you simply end up with that single good solution. If you have many models but they are similar to one another, you will not overfit either, because the effective complexity of such a family of similar models is low, and the overfitting is also low. But if your models are significantly different and all of them are roughly equally bad, then the overfitting can be very large, and the effect grows monstrously as the number of models grows. This was exactly the situation we encountered in this tender, but it became possible to explain it theoretically only a few years later.
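The selection effect described here can be reproduced in a toy experiment: among many different but equally poor predictors, the one that looks best on the training period is noticeably worse on the test period, and the gap widens as the number of models grows. The random predictors below are purely illustrative.

```python
# Toy demonstration of overfitting caused by model selection: each "model" is an
# independent noisy predictor of the same series; picking the best one on the
# training period overestimates its quality on the test period.
import numpy as np

rng = np.random.default_rng(7)
train_len, test_len = 100, 100
truth = rng.normal(size=train_len + test_len)

def selection_gap(n_models):
    preds = truth + rng.normal(scale=1.0, size=(n_models, train_len + test_len))
    train_err = np.abs(preds[:, :train_len] - truth[:train_len]).mean(axis=1)
    best = train_err.argmin()                       # model chosen on the training period
    test_err = np.abs(preds[best, train_len:] - truth[train_len:]).mean()
    return train_err[best], test_err

for n in (1, 6, 30):
    tr, te = selection_gap(n)
    print(f"{n:>2} models: train error of chosen model {tr:.3f}, its test error {te:.3f}")
```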

There was another cautionary tale. At that tender, presenting our solution to the organizers, we explained: "We believe your functional is not designed correctly; you cannot do this. Putting the predicted value in the denominator is, of course, not good. And your functional expresses the squared error..." What is a squared ruble, for example? It has no economic meaning. We proposed optimizing functionals that express the company's losses from inaccurate forecasts, showed how such a functional should be structured, and showed that we were ready to optimize such non-standard functionals, thereby increasing the company's profit - exactly what business needed. When we actually started working on the project, it turned out that the data needed to build such a functional was very dirty in this company. For some products such data was not available at all; for others it was inaccurate, because managers had no incentive to have it checked and controlled. This is not accounting data - it is auxiliary information: maybe someone will need it someday, maybe not.

As a result, it turned out that the data was dirty, and it was necessary to improve business processes and work on data quality. This is something that business did not understand at that moment. When we delivered our solution and realized that the fight for the quality and purity of data is an important part of the business, we also helped our partners realize this and improve some things within their business processes. It is an instructive story about the connection between business and science: science can offer business non-standard solutions. Sometimes this is not difficult at all, and, conversely, in the process of searching for these solutions on real cases we can get feedback for science, encounter unresolved theoretical questions and move the theory forward.

Doctor of Physical and Mathematical Sciences, Professor, Faculty of Computer Science, National Research University Higher School of Economics

INTRODUCTION

Translated from Greek, the word “forecast” means foresight, a prediction about the development of something, based on certain factual data. In general, a forecast should be understood as a scientifically based judgment about the possible states of an object in the future, about alternative ways and timing of its implementation.

The purpose of forecasting is to create scientific prerequisites, including scientific analysis of economic development trends; variant foresight of the upcoming development of social reproduction, taking into account both existing trends and intended goals; assessment of the possible consequences of decisions made; justification of directions of socio-economic and scientific-technical development for making management decisions.

Forecasts of natural resources characterize the involvement of the latter in economic turnover and cover all types of social reproduction and the natural environment: fuel and mineral resources, resources of the World Ocean, some types of energy, flora and fauna, as well as environmental protection.

MATHEMATICAL METHODS FOR PREDICTION

Mathematical forecasting methods provide high reliability of the information obtained. In forecasting, the most widely used methods are mathematical extrapolation, economic-statistical modeling and economic-mathematical modeling.

Mathematical extrapolation methods make it possible to characterize the predicted processes quantitatively. Extrapolation is based on studying the patterns of development of the phenomenon under study that have formed in the past and extending them into the future. The method relies on the principle of inertia in economic life: the observed patterns remain fairly stable over a certain period of time.

Extrapolation in forecasting is carried out by aligning statistical series without their connection with other series of economic dynamics, the influence of which is taken into account in an average form only on the basis of past experience.

The premise that the conditions of the previous period remain unchanged during extrapolation limits the applicability of this method to relatively short periods during which no significant qualitative changes occur. The reliability of the forecasting results depends on the ratio between the duration of the observed past period (retrospection) and the lead period (prospection).

To apply this method, it is necessary to have a long series of indicators over the past period. This information is studied and processed. The actual time series is aligned by graphic-analytical or statistical selection of the approximating function. Next, hypotheses for changes in the object during the forecast period (lead period) are developed and formalized in the form of quantitative indicators (trends). In this case, the values ​​of indicators can be predicted not only at the end of the forecast period, but also at intermediate stages.
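A minimal sketch of this extrapolation scheme, using an invented indicator series and a linear trend as the approximating function (the forecast is produced both for intermediate years and for the end of the lead period):

```python
# Extrapolation forecasting sketch: fit an approximating function (here a linear
# trend, chosen for illustration) to the past period and extend it over the lead period.
import numpy as np

years = np.arange(2005, 2020)                      # past period (retrospection)
indicator = 40 + 1.5 * (years - 2005) + np.random.default_rng(3).normal(0, 2, years.size)

# statistical selection of the approximating function: least-squares linear trend
a1, a0 = np.polyfit(years, indicator, deg=1)

lead_years = np.arange(2020, 2025)                 # forecast (lead) period
forecast = a0 + a1 * lead_years
for y, f in zip(lead_years, forecast):
    print(f"{y}: {f:.1f}")                         # values at intermediate stages too
```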

Methods and techniques of mathematical statistics and probability theory make it possible to use a wide range of functions to predict the required indicator over time.

These methods have disadvantages: a reliable forecast cannot be given for a long period or when the data change abruptly, and they cannot determine the qualitative characteristics of the predicted objects.

Mathematical extrapolation methods are used in forecasting land allocations for non-agricultural needs, establishing crop yields, etc.

Economic-statistical models are most often used in forecasting. Based on them, the yield of agricultural crops, the productivity of animals, the yield of products from agricultural lands, and forecast standards (afforestation of the territory, agricultural development of land, etc.) are calculated. This method allows you to scientifically substantiate the indicators and standards used in planning.

An economic-statistical model is a function that connects resultant and factor indicators, expressed in analytical, graphical, tabular or other form, built on the basis of mass data and having statistical reliability. Such functions are called production functions, since they describe the dependence of production results on available factors.

The process of developing an economic-statistical model (modeling) consists of the following stages:

  • 1. Economic analysis of production. Definition of the dependent variable (resultative indicator) and identification of factors influencing it (factorial indicator).
  • 2. Collection of statistical data and their processing.
  • 3. Establishing a mathematical form of connection (type of equation) between effective and factorial indicators.
  • 4. Determination of numerical parameters of the economic-statistical model.
  • 5. Assessment of the degree of compliance of the economic-statistical model with the process being studied.
  • 6. Economic interpretation of the model.

Economic analysis of production consists of determining the goal, objective and choosing an effective indicator that reflects the effectiveness of the forecast solution. When analyzing the intensity of land use in agricultural organizations, the cost of gross output per 100 hectares of agricultural land (arable land), crop yields, land productivity, etc. can be used as an effective indicator.

Soil fertility scores, agricultural development and plowing, energy availability, labor availability, etc. are used as factor indicators.

When choosing independent factors, certain rules are followed:

  • 1. The accuracy of production functions is higher when more empirical data are used (with large samples).
  • 2. Factors-arguments should have the most significant impact on the process being studied, be quantitatively measured and represented by only one sign.
  • 3. The number of selected factors should not be large, as this complicates the model and increases the complexity of its use.
  • 4. The factors included in the model should not be functionally related to each other (multicollinearity), since such factors characterize the same aspect of the phenomenon being studied and duplicate each other. Using them in an economic-statistical model may distort the dependencies being studied and the calculation results.

The collection of statistical data and their processing is carried out after determining the dependent variable (resultative indicator) and the argument factors. When collecting information, experimental and statistical methods are used. The first involves the study of data obtained from experiments whose conditions can be controlled. But in land management the process of experimentation is difficult, and when solving individual issues it is generally impossible.

The second method is based on the use of statistical data (complete or sample). For example, if, when analyzing the size of land use, data on all agricultural enterprises in the region are used, then the statistical information is continuous, and the population being studied is general.

However, the size of general populations can be too large - several hundred units or more. Therefore, to reduce calculations and save time, the number of observations is reduced by obtaining sample data (forming a sample population) using various methods that allow maintaining the reliability of calculations and extending the research results to the general population.

In all cases, the sample should be homogeneous, exclude anomalous objects and data (those that differ sharply from all the others), and include only factors that can be unambiguously measured by a number or a system of numbers.

The determination of the mathematical form of the connection between variables is carried out by logically analyzing the process. Analysis allows you to establish the type of equation (linear, nonlinear), the form of the relationship (paired or multiple), etc.

Determining the model parameters means calculating the numerical characteristics of the mathematical relationship (equation). For example, if a linear form y = a0 + a1·x is chosen to establish the dependence of crop yield (y) on the soil fertility score (x), then this stage of modeling consists of obtaining numerical values of the coefficients a0 and a1.

Various methods can be used to determine the parameters of the equation, but practice shows that the least squares method gives the most accurate results. Assessment of the degree of compliance of the economic-statistical model with the process being studied is carried out using special coefficients (correlation, determination, materiality, etc.). These coefficients show the correspondence of the mathematical expression to the process under study, whether the resulting model can be used for subsequent calculations and making land management decisions, how accurately the effective indicator is determined and with what probability it can be trusted.
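A sketch of stages 3-5 for the example just given: a linear model y = a0 + a1·x linking crop yield to the soil fertility score is fitted by least squares and assessed via the coefficient of determination. The data points are invented for illustration.

```python
# Economic-statistical model sketch: least-squares fit of yield on soil fertility score
# and the coefficient of determination as an adequacy measure (illustrative data).
import numpy as np

score = np.array([42, 55, 61, 48, 70, 66, 53, 75], dtype=float)   # soil fertility score x
yield_ = np.array([18, 24, 27, 21, 33, 30, 23, 35], dtype=float)  # crop yield y

a1, a0 = np.polyfit(score, yield_, deg=1)          # least-squares estimates of a1 and a0
fitted = a0 + a1 * score

ss_res = ((yield_ - fitted) ** 2).sum()
ss_tot = ((yield_ - yield_.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot                           # coefficient of determination

print(f"model: y = {a0:.2f} + {a1:.3f} * x,  R^2 = {r2:.3f}")
```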

The model finds economic application in the scientific substantiation of standards and the economic substantiation of indicators in forecast developments.

The most common type of economic-statistical model is the production function.

A production function is a mathematically expressed dependence of production results on production factors.

Using production functions, forecasting analyzes the state and use of land; prepares initial information for economic-mathematical problems of optimizing various decisions; establishes the level of the resultant attribute for the future when planning and forecasting land use in land management schemes and projects; and determines economic optima, elasticity coefficients, and the efficiency and interchangeability of factors. To express dependencies in forecasting, the linear form is used most often, since it is easy to work with; power, hyperbolic, polynomial and other forms are used less frequently.
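For reference, the analytical forms mentioned can be written as follows (y is the resultant indicator, x_j the factor indicators, a_j the parameters to be estimated):

```latex
% Common analytical forms of production functions (assumed standard notation):
% y - resultant indicator, x_j - factor indicators, a_j - estimated parameters.
\[
\text{linear: } y = a_0 + \sum_{j=1}^{k} a_j x_j, \qquad
\text{power: } y = a_0 \prod_{j=1}^{k} x_j^{\,a_j}, \qquad
\text{hyperbolic: } y = a_0 + \sum_{j=1}^{k} \frac{a_j}{x_j}
\]
```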

Economic-mathematical modeling involves the creation of a model that studies an economic object and represents its description using signs and symbols (mathematical equations and inequalities, matrices, formulas, etc.).

The solution of any economic-mathematical problem in planning and forecasting in land management involves a large amount of information. For modeling, it is necessary to obtain the initial information, process it, analyze it and evaluate it. The collected information must be complete, reliable, timely and presented in a form convenient for further use; at the same time, the costs of collecting, processing, transmitting and storing the information must be taken into account. In planning and forecasting in land management, the following types and sources of information are used: geoinformation data, statistical and reporting data on the planning object, planning information, and regulatory information.

The basis of the economic-mathematical model is a matrix - a special table containing semantic or code designations of the objective function, the variables and the constraints, together with their numerical expression in the form of coefficients and constraint values.

The objective function is an analytical form of expressing the optimality criterion. In modeling, depending on the level of the object (process), global, sectoral, local and particular optimality criteria are distinguished.

The size of the matrix is determined by the list of variables. Land areas and indicators of the production activity of the agricultural sector (for crop production and livestock production as a whole, for individual agricultural crops, and for types of livestock) are used as variables.

Finding optimal solutions when forecasting depends on the correct definition of the composition of constraints. Constraints are formulated in the form of a system of inequalities and equations expressing production capabilities and the balance of resources.

Restrictions can be basic, which are imposed on all or most variables (land area, working areas, doses of fertilizers, etc.), additional - imposed on individual variables or small groups (volumes of production of certain types of products, consumption by some groups of animals of certain types of feed, etc.) and auxiliary (they do not have independent economic significance, they are used for the correct formulation of economic requirements and mathematical notation).

Various types of economic and mathematical models are used: correlation models and production functions, balance models, optimization models. When developing a land management scheme for an administrative district, the following main economic and mathematical problems are solved: distribution of land in the administrative district by category; optimization of measures for development and intensification of land use; optimization of location, specialization and level of concentration of agricultural production in the administrative region; establishing the optimal size of agricultural organizations; redistribution of land between agricultural organizations, etc. These tasks often consist of blocks, each of which has its own optimality criterion.

For example, the model for optimizing the location, specialization and level of concentration of agricultural production in an administrative region is based on two models: one for determining the optimal combination of branches of agricultural production and one for establishing the optimal size of land use of agricultural organizations.

This problem consists of blocks, each of which corresponds to an agricultural organization.

The following unknowns are used as variables: sown areas of agricultural crops; types and subtypes of land; transformable lands; types of on-farm resources and other variables that take into account the characteristics of the area.

The following groups of restrictions are distinguished:

  • 1. Conditions for the use of land (by area, by quality conditions) and the possibility of their transformation.
  • 2. Ratio of land areas.
  • 3. Agrobiological and zootechnical conditions for agricultural production.
  • 4. Restrictions on the production and use of feed.
  • 5. Recommended size of land use of agricultural organizations depending on specialization.
  • 6. Resource restrictions (in terms of product sales, labor costs, monetary costs for technical means, mineral fertilizers, seeds, etc.).
  • 7. Restrictions taking into account the characteristics of settlement, as well as the use of labor and mechanized resources.
  • 8. General district conditions and proportions (balance of distribution of material and technical funds in the district, number of people employed in agriculture and the total population in the district, etc.).

As a rule, the minimum of reduced costs for a fixed volume of production is used as an optimality criterion when solving this problem.
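A minimal sketch of one such block in linear-programming form, with two crops, toy coefficients and the minimum of (reduced) costs for fixed production volumes as the criterion; a real district-level model is, of course, far larger:

```python
# Toy linear-programming block: two crops compete for a fixed arable area and a labour
# limit, each crop must reach a fixed output volume, and reduced costs are minimized.
# All coefficients are illustrative.
from scipy.optimize import linprog

# variables: x1, x2 - sown areas (ha) of crop 1 and crop 2
costs = [120.0, 90.0]                 # reduced costs per hectare (objective to minimize)

A_ub = [
    [1.0, 1.0],                       # total sown area <= 1000 ha
    [3.0, 2.0],                       # labour: man-days per ha, limit 2600
    [-30.0, 0.0],                     # fixed output crop 1: 30 c/ha * x1 >= 9000 c
    [0.0, -25.0],                     # fixed output crop 2: 25 c/ha * x2 >= 10000 c
]
b_ub = [1000.0, 2600.0, -9000.0, -10000.0]

res = linprog(c=costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print("optimal areas (ha):", res.x, " minimum cost:", res.fun)
```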

As a result of solving the problem, the following is established: the composition and ratio of lands for individual land uses and for the region as a whole; areas of land subject to improvement, development and transformation; sown areas of agricultural crops; structure of the animal herd, production and consumption of feed; inter-farm and intra-farm distribution of industries in the region; specialization and volume of production in agricultural organizations and their associations; balances of funds in the region as a whole and in the context of agricultural organizations; distribution of one-time funds between agricultural organizations.