If you can't sell it, you can only hold it yourself and wait for it to appreciate or be taken over. Of course, the probability of this happening is very small. If it does happen, the following are possible causes:
1. China, the United States, or the European Union suddenly announces a ban on bitcoin and its circulation
2. Bitcoin exposes fatal weaknesses and defects that are difficult to overcome, especially in security
3. Bitcoin fails to produce a killer application for a long time, its application scenarios remain strictly limited, and people gradually lose interest in it
4. A better alternative to bitcoin emerges, or a jointly issued global virtual currency wins worldwide recognition
On June 14, the first native blockchain application, the "Du Universe" app, was launched online. This is another important move by the network in the blockchain field.
It is reported that Du Universe is currently recruiting content producers; the earlier you join, the greater the profit. In the future, "Du Universe" will also open a third-party developer platform, introduce more applications and gameplay, complete value transfer and interaction through its own tokens, further improve its ecosystem, and better serve users.
Since 2018, the network has been moving frequently in the blockchain field. In April it released Totem, an original-image service platform that uses the independently developed blockchain rights-registration network to provide one-stop services for image rights confirmation, monitoring, and protection. Then in May it released the blockchain network operating system Super Chain. The operating system is compatible with the developer ecosystems of bitcoin and Ethereum; it supports pluggable consensus mechanisms to address the current energy consumption problem, and supports 100,000 concurrent transactions on a single chain.
Wang Jianfeng
(Institute of Mechanics, Chinese Academy of Sciences, Beijing 100080)
[Abstract] Mutation phenomena are very common in nature, society, the economy, and other fields. If the output sequence of a system changes suddenly at an unknown time, that time is called a change point. The purpose of change point statistical analysis is to judge the existence of change points and to determine their location and number. Existing change point analysis includes mean change point analysis, probability change point analysis, and model-parameter change point analysis. This paper proposes a new concept, the slope change point: the point where the acceleration (or deceleration) of the slope of a curve changes the most. A second-order difference method of regression coefficients for finding a single slope change point is proposed; it can find the "turning point" of a curve that is monotone and of consistent concavity. Examples show that the method is simple, intuitive, and effective.
[Key words] change point analysis; slope change point; regression coefficient

Studying whether and when sudden changes occur helps grasp the evolution law of events or processes, especially the occurrence and development of disasters, so as to provide a basis for their prediction, prevention, and control.
If the output sequence of the system changes suddenly at an unknown time, that time is called the change point. The purpose of change point statistical analysis is to judge and test the existence, location, and number of change points, and to estimate their jump sizes. Change point statistical analysis is a powerful tool for the quantitative analysis of all kinds of monitoring data and for studying the laws of geological disasters.
Change point analysis is divided into mean change point analysis, probability change point analysis, and model-parameter change point analysis [1]. When the mean value (or probability distribution, or a model parameter) of the data changes significantly before and after a certain time, that time is called a mean (or probability, or model-parameter) change point. In geological problems, however, a curve may rise gradually and then suddenly accelerate after a certain time; sometimes a rapidly falling curve suddenly slows after a certain point and gradually flattens; and some curves fall gently at first and, after a certain point, suddenly turn to fall rapidly along the diagonal. Because such a "turning point" is an important characteristic point, often associated with a specific physical meaning in a specific problem, it is important to determine it accurately. This kind of "turning point" is called the slope change point of the curve in this paper.
This paper discusses the slope change point problem through three examples: how to find the boundary point between the second and third stages of the surface creep curve of rock and soil mass; how to find the "turning point" where the variance reduction function suddenly slows when calculating the correlation distance δ (or θ) of soil properties; and how to determine the preconsolidation pressure P from the e-lg p curve.
2 Basic principle of finding a single slope change point

Assuming that a given curve contains only one slope change point, the problem is to find a simple, quantitative, and accurate method to determine it.
Generally, most measurement data are discrete data-point pairs, and when the measurement interval is long it is not easy to form a continuous curve that truly reflects the process. Most observation series therefore cannot be expressed by a curve equation, so the slope at each point cannot be calculated by the derivative method of calculus; the slope of the curve over a short time interval (or distance) on either side of a point can only be approximated by the linear regression coefficient of several successive data points on that side, because over a short interval the curve is approximately a straight line. The difference between the slopes on the two sides of a point reflects the magnitude of the slope change there; this amounts to a first-order difference. But the "turning point" sought here is not the point where the slope changes the most, but the point where the local acceleration (or deceleration) of the slope changes the most. When the monitoring times are equally spaced (or equidistant), the second-order difference of the slope changes reflects the acceleration (or deceleration) of the slope change. By finding the local maximum of this acceleration (or deceleration), the unit interval in which the slope change point falls can be located; the estimate of the change point can then be obtained quantitatively by a method similar to the mode of grouped data.
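The principle above can be sketched in a few lines of code. This is an illustrative sketch, not the paper's exact procedure: a single fixed window size n (rather than a weighted combination of windows), and np.polyfit as the regression routine, are assumptions.

```python
import numpy as np

def window_slope(t, y, lo, hi):
    # least-squares regression coefficient (slope) of y on t over t[lo:hi]
    return np.polyfit(t[lo:hi], y[lo:hi], 1)[0]

def slope_change_scores(t, y, n=3):
    """For each exploration point (midpoint of adjacent samples), take the
    regression slope of the n points before it and the n points after it;
    their difference dS is the first-order slope difference, and the
    second-order difference d2S of that sequence measures how sharply the
    slope is accelerating (or decelerating)."""
    mids, dS = [], []
    for i in range(n, len(t) - n + 1):
        b_left = window_slope(t, y, i - n, i)    # slope just before the point
        b_right = window_slope(t, y, i, i + n)   # slope just after the point
        mids.append(0.5 * (t[i - 1] + t[i]))
        dS.append(b_right - b_left)
    return np.array(mids), np.array(dS), np.diff(dS)
```

On a piecewise-linear series with a single kink, dS peaks near the kink and its peak value equals the jump in slope there.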
3 Methods and steps for finding a single slope change point

The cumulative displacement monitoring data of the Asamushi landslide on the Tohoku railway line in Japan [2] are shown in Table 1 (the data were read from Fig. 1).
Table 1 Cumulative displacement of the Asamushi landslide on the Tohoku railway line in Japan
Figure 1 Change point analysis results for the time-cumulative displacement curve of the Asamushi landslide on the Tohoku railway line in Japan
This is the displacement-time curve measured on the landslide surface, and it clearly contains data from the second and third creep stages. In the second stage the curve segment is nearly a straight line, reflecting constant-velocity creep; the third stage is the accelerated creep stage, in which the curve rises markedly (Fig. 1). That is, in this example we know there is only one slope change point, and as long as we can find the point with the largest local acceleration change of the slope, we have found it. The second-order difference method of regression coefficients for finding this change point is introduced step by step below.
(1) Determination of exploration points. Since the monitoring data are equally spaced, the midpoint of each pair of adjacent observation times is taken as an exploration point, forming the exploration point sequence. For example, the exploration point sequence in Example 1 is t_i = 21.25, 21.75, 22.25, 22.75, 23.25, ..., 26.75.
(2) A sliding window is constructed centered on each exploration point to calculate the slope of the curve around it (that is, the linear regression coefficient of several data points before and after the exploration point). Because the number of data points n participating in the regression affects the value of the regression coefficient, the same number (n) of data points is taken before and after the exploration point to form the sliding window, so that the slopes before and after can be compared under the same conditions. Because the curve approximates a straight line only over a short time (or small distance), n cannot be too large. Here n = 2, 3, 4 is taken, forming three sets of sliding windows.
(3) In the sliding window at t_i, a linear regression is fitted to the n data points before (to the left of) the exploration point t_i, and the regression coefficient is recorded as b⁻_n(t_i); similarly, the n data points after (to the right of) t_i are fitted, and that regression coefficient is recorded as b⁺_n(t_i). Obviously, when n = 2 the calculated regression coefficient reflects the local slope behavior well but the overall behavior poorly; moreover, because the number of points is so small (two points always determine a straight line), the randomness is large and the statistical significance is weak. Conversely, when n = 4 the regression coefficient reflects the overall slope behavior better but the local behavior worse, and because there are more points the statistical significance is stronger and the randomness smaller; n = 3 lies between the two. Therefore the results for larger n deserve more weight, and the coefficients for n = 2, 3, 4 are combined in a weighted average with weights n². Thus, when b⁻_n(t_i) and b⁺_n(t_i) exist for all n = 2, 3, 4, the weighted averages are

S⁻(t_i) = [4·b⁻_2(t_i) + 9·b⁻_3(t_i) + 16·b⁻_4(t_i)] / 29,  S⁺(t_i) = [4·b⁺_2(t_i) + 9·b⁺_3(t_i) + 16·b⁺_4(t_i)] / 29   (1)
(4) For each exploration point t_i, calculate the difference between S⁺(t_i) and S⁻(t_i), recorded as ΔS(t_i), i.e.

ΔS(t_i) = S⁺(t_i) − S⁻(t_i)   (2)
ΔS(t_i) can be regarded as the increment (or change) of the slope of the curve from before t_i to after it; it can also be understood as the first-order difference of the curve's slope after and before the point t_i. Its magnitude reflects the increase of the slope at t_i.
(5) For the ΔS(t_i) sequence, calculate the second-order difference, that is:
Δ²S(t_i) = ΔS(t_i) − ΔS(t_(i-1))   (3)
The size of the second-order difference reflects the change in the acceleration of the curve's slope over the interval (t_(i-1), t_i). The values Δ²S(t_i) likewise form a sequence.
(6) Search along the t_i sequence from small to large for the local maximum of the Δ²S(t_i) sequence (a value larger than both its neighbors). Let its corresponding interval be (t_(i-1), t_i); this should be the interval containing the slope change point. Then, using the two adjacent intervals (t_(i-2), t_(i-1)) and (t_i, t_(i+1)) and their corresponding values Δ²S(t_(i-1)) and Δ²S(t_(i+1)), an accurate value t* of the slope change point can be obtained by linear interpolation, analogous to the mode of grouped data. The calculation formula is as follows:
t* = t_(i-1) + Δt · [Δ²S(t_i) − Δ²S(t_(i-1))] / ([Δ²S(t_i) − Δ²S(t_(i-1))] + [Δ²S(t_i) − Δ²S(t_(i+1))])   (4)

where Δt is the width of the interval.
It can be seen from Figure 2 that the maximum second-order difference is Δ²S(t_i) = 23.63; its corresponding interval is (23.25, 23.75), and its two adjacent intervals are (22.75, 23.25) and (23.75, 24.25), with corresponding second-order differences Δ²S(t_(i-1)) = 18.80 and Δ²S(t_(i+1)) = 11.47.
Substituting these values into formula (4) gives the estimate of the slope change point t*. It is of great practical significance to find this dividing point: it tells the observer when to begin intensive monitoring of the landslide dynamics, and it makes it possible for a field engineer to predict the landslide occurrence time more accurately using only the third-stage data points. The determination of the slope change point therefore helps divide the second and third stages of the landslide creep curve. In addition, it can also help find the "turning point" where a curve changes from rapid descent to sudden slowing, or from gentle descent to rapid descent along the diagonal.
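The interpolation step can be checked numerically. The exact form of formula (4) is not legible in the source, so the sketch below assumes the standard mode-of-grouped-data interpolation, with the interval width Δt = 0.5 taken from the exploration-point spacing:

```python
def interpolate_change_point(t_lower, dt, d2s_prev, d2s_peak, d2s_next):
    """Mode-of-grouped-data style interpolation within the interval
    (t_lower, t_lower + dt) that holds the peak second-order difference.
    NOTE: this is an assumed reconstruction of formula (4), not a quotation."""
    d1 = d2s_peak - d2s_prev   # excess over the preceding interval
    d2 = d2s_peak - d2s_next   # excess over the following interval
    return t_lower + dt * d1 / (d1 + d2)

# values quoted in the text for the Asamushi landslide example
t_star = interpolate_change_point(23.25, 0.5, 18.80, 23.63, 11.47)
```

With these values t* comes out near 23.4, i.e. inside the peak interval (23.25, 23.75) and pulled toward the side with the larger neighboring second-order difference.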
The spatial resolution of existing satellite remote sensing sensors ranges from 1 m to tens of kilometers. Data of different resolutions reflect changes of landscape structure at different scales, and different research targets require remote sensing data of different resolutions. But what is the criterion for selecting spatial resolution? Ideally, we should select the resolution whose data contain the required information with the minimum data volume (Atkinson and Curran, 1997). Determining that resolution, however, is not a simple problem.
The information expressed by spatial variables exists in the relationships between measurements of the variables, which can be expressed as spatial dependence or spatial variation (Atkinson and Curran, 1997). When the spatial distribution of a variable is of concern, the spatial variation between samples determines the accuracy of estimation and the information finally displayed (Dungan et al., 1994). Estimation accuracy and information content are thus the reference standards for selecting spatial resolution (Atkinson, 1995). For remote sensing data, however, because the samples cover the whole study area, spatial variation only determines the information to be displayed (Atkinson, 1997). Therefore, to select an appropriate spatial resolution for remote sensing data, we need to understand how remote sensing information changes with spatial resolution.
Strahler (1986) regards the scene of a remote sensing image as a mosaic of discrete objects, continuously distributed and covering the whole area. When the spatial resolution of the data is much finer than the size of the targets, adjacent pixels are highly similar. As the resolution gradually becomes coarser, the similarity between adjacent pixels gradually weakens; when the pixel size equals the target size of the scene, adjacent pixels represent different targets and their similarity is weakest. When the pixel is larger than the target size, the similarity between adjacent pixels begins to increase again, because each pixel now mixes the information of several different targets. Local variance is an index measuring the similarity between adjacent pixels. Suppose Z(x_ij) is the pixel value at position x_ij in the image, where i and j are the row and column numbers; then the local variance in a (2n + 1) × (2m + 1) window centered on x_ij is:
σ²(x_ij) = (1 / ((2n + 1)(2m + 1))) Σ [Z(x_kl) − μ_ij]², summed over the window,

where μ_ij is the average value of the pixels in the (2n + 1) × (2m + 1) window centered on x_ij. Taking each pixel of the image in turn as the window center, the local variance is computed, and these values are then averaged to give the mean local variance of the whole image for that window size.
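A direct (unoptimized) sketch of the mean-local-variance computation, skipping the border pixels whose windows fall outside the image:

```python
import numpy as np

def mean_local_variance(img, n=1, m=1):
    """Average of the local variances of all (2n+1) x (2m+1) windows that fit
    entirely inside the image; border pixels are skipped (the boundary effect
    noted by Atkinson and Curran, 1997)."""
    rows, cols = img.shape
    variances = []
    for i in range(n, rows - n):
        for j in range(m, cols - m):
            window = img[i - n:i + n + 1, j - m:j + m + 1]
            variances.append(window.var())
    return float(np.mean(variances))
```

A constant image gives zero; a fine checkerboard, where every window mixes both values, gives a large mean local variance.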
Woodcock and Strahler (1987) proposed using local variance to determine the optimal spatial resolution: the resolution at which the average local variance of the image reaches its maximum is optimal. One problem with this approach is the boundary effect: pixels within m or n pixels of the image boundary have no complete window, so no local variance is calculated around them (Atkinson and Curran, 1997). In recent years, spatial statistics, especially geostatistics, has been used to study the scale effect of remote sensing information. In geostatistics, semivariance is a measure of the spatial variation (or spatial dependence) of a variable, obtained by calculating the variogram (or semivariogram) of the variable; different variograms reveal different spatial variation characteristics. Atkinson (1999) pointed out that the variogram of a variable is related to the size of the support. In geostatistical terms, the support is the measurement unit of the variable; for remote sensing data, the support corresponds to the spatial resolution. Because the spatial variation of a variable varies with the size of the support, an appropriate spatial resolution can be determined by studying the structure of the variogram.
In the theory of regionalized variables in geostatistics, the observation of a variable on a certain support V can be expressed by the following model:
Z(x) = m_V + e(x)   (7-6)
where Z(x) is a random function (RF) defined at position x in two-dimensional space; m_V is the local mean of Z in region V; and e(x) is a random function with zero mean. When the intrinsic stationarity assumption is satisfied, we have:
γ(h) = (1/2) E{[Z(x) − Z(x + h)]²}   (7-7)

where the variogram γ(h) expresses the semivariance of the regionalized variable as a function of the lag h. The structure of the variogram characterizes the spatial dependence of the variable.
The variogram defined by formula (7-7) is the variogram on point support. In practice, however, observations are made on supports of a certain size. The variogram on a support V of a given size can be estimated by regularization of the point-support variogram (Journel and Huijbregts, 1978):
γ_V(h) = γ̄(V, V_h) − γ̄(V, V)   (7-8)
In the formula, γ̄(V, V_h) is the average point variogram between two supports of size V whose centers are a distance h apart, representing the spatial variation between supports; γ̄(V, V) is the average point variogram within a support of size V, representing the spatial variation within a support. Equation (7-8) shows that the spatial variation of a regionalized variable consists of two parts: the spatial variation between supports across the region and the spatial variation within a support.
For remote sensing data, because all measurements are made on supports of pixel size, the point variogram cannot be obtained directly. However, the experimental variogram on support V can be obtained from the sample data measured on support V. Let z_V(x_1), z_V(x_2), ... be the observations on supports of size V centered at x_1, x_2, ...; then the experimental variogram of the variable is:
γ̂_V(h) = (1 / (2N(h))) Σ [z_V(x_i) − z_V(x_i + h)]²   (7-9)

where N(h) is the number of observation pairs separated by the lag h.
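For regularly spaced data, the estimator of form (7-9) reduces to a few lines; this 1-D sketch treats the lag h in pixel units:

```python
import numpy as np

def experimental_variogram(z, max_lag):
    """gamma(h) = (1 / (2 N(h))) * sum over the N(h) pairs (z_i, z_{i+h})
    of the squared differences, for h = 1 .. max_lag."""
    z = np.asarray(z, dtype=float)
    return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                     for h in range(1, max_lag + 1)])
```

On a strictly alternating series the estimator oscillates: odd lags always compare different values, even lags identical ones.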
The point variogram can be estimated from the experimental variogram by de-regularization, a complex iterative process; Curran and Atkinson (1999) describe the de-regularization of the variogram in detail.
Figure 7-2 Significance of the parameters of a typical variogram. Commonly used variogram models include the exponential, spherical, and Gaussian models (Deutsch and Journel, 1998). The model parameters, including the range, sill, and nugget, determine the spatial variation structure of the variable (Fig. 7-2). For example, the expression of the spherical model is:
γ(h) = C_0 + C_1 · (3h/(2a) − h³/(2a³)) for 0 < h ≤ a;  γ(h) = C_0 + C_1 for h > a   (7-10)
The parameters C_0, C_1, and a in equation (7-10) represent the nugget, the partial sill, and the range of the variogram, respectively. As h increases, the semivariance of the variable also increases; the value of h at which the semivariance reaches its maximum is the range of the variogram, and this maximum semivariance, C_0 + C_1, is called the sill. The nugget is the semivariance as h approaches zero. Generally, the sill represents the structural variance of the variable itself, the nugget is caused mainly by measurement error (Atkinson, 1995), and the range represents the extent of spatial dependence of the variable: two points separated by more than the range are no longer spatially dependent.
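A sketch of the spherical model of equation (7-10), with the parameter names used in the text (nugget C0, partial sill C1, range a):

```python
import numpy as np

def spherical_variogram(h, c0, c1, a):
    """Spherical variogram model: rises from the nugget c0 near the origin
    and flattens at the sill c0 + c1 once the lag h reaches the range a."""
    h = np.asarray(h, dtype=float)
    rising = c0 + c1 * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, rising, c0 + c1)
```

The curve is monotone up to the range and constant beyond it, which is exactly the "no spatial dependence past the range" behavior described above.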
By analyzing the variogram parameters of a remote sensing image, we can explore how the information in the image changes with resolution. Atkinson et al. (1997, 1999) proposed selecting the optimal resolution by examining, for different pixel sizes, the semivariance at a spatial lag equal to one pixel. The experimental variogram of images of different resolutions is calculated by formula (7-9); when h in (7-9) equals the image resolution, the calculated value is the semivariance at a lag of one pixel for that resolution. Plotting image resolution on the abscissa against this one-pixel-lag semivariance on the ordinate, the pixel size at which the semivariance reaches its maximum is the optimal image resolution. Obviously, this method has the same meaning as the mean local variance method: when the image resolution is fine, adjacent pixels have strong spatial dependence and the semivariance is small; when the resolution equals the size of the targets in the image, adjacent pixels are no longer spatially dependent and the semivariance reaches its maximum. The semivariances of images of different resolutions can be obtained either by calculating the variogram of each resampled image directly, or by regularizing the point variogram through formula (7-8); the advantage of the latter is that the variogram of an image of arbitrary resolution can be obtained (Atkinson and Curran, 1997, 1999). Atkinson et al. (1997) compared the optimal resolutions selected by this method and by the local variance method and obtained similar results.
However, the problem with this method is that the calculation process, including the de-regularization of the experimental variogram and the regularization of the point variogram, is very complex, requires continuous iteration, and needs some artificially specified parameters, so it is not convenient for practical application.
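The direct variant of the idea (computing the lag-one semivariance of successively degraded images rather than regularizing a point variogram) can be sketched as follows; using block-averaging as the degradation operator is an assumption:

```python
import numpy as np

def coarsen(img, f):
    """Degrade resolution by averaging non-overlapping f x f pixel blocks."""
    r, c = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    return img[:r, :c].reshape(r // f, f, c // f, f).mean(axis=(1, 3))

def semivariance_lag1(img):
    """Semivariance at a lag of one pixel, averaged over the row and column
    directions: small when neighbouring pixels are alike, maximal when the
    pixel size matches the target size."""
    gx = 0.5 * np.mean((img[:, 1:] - img[:, :-1]) ** 2)
    gy = 0.5 * np.mean((img[1:, :] - img[:-1, :]) ** 2)
    return 0.5 * (gx + gy)
```

For a synthetic scene made of 4 × 4-pixel targets, the lag-one semivariance peaks when the image is degraded to a 4-pixel cell, the scale at which neighbouring pixels represent different targets.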
As equation (7-8) shows, the variogram of a variable on support V consists of a part representing the spatial variation across the region and a part representing the variation within the support. For remote sensing data, the regional part is the spatial variation between pixels, while the within-support part is the spatial variation within pixels. Generally, as the pixel scale increases, the spatial variation and semivariance within a pixel increase gradually. When the variogram representing the within-pixel spatial variation is fitted with a spherical, exponential, or Gaussian model, its range indicates that there is no spatial dependence between points farther apart than the range. Wang Guangxing et al. (2001) used the range of the within-pixel variogram as an index for selecting an appropriate resolution. The premise of this method is that there are many observations within a pixel of a certain size. When the pixel is small, it is difficult to calculate the variogram because there are few observations inside it, and even with enough observations, the spatial dependence between points in a small pixel is so strong that the calculated variogram shows no obvious range; moreover, calculating a variogram within each pixel and averaging is very complicated.
According to Strahler's (1986) scene model, the scene of a remote sensing image is composed of a series of discrete objects embedded in one another. Different targets have different spectral radiation or reflection characteristics, so the image can reflect the spatial structure of the scene. When a remote sensing image is resampled from a fine resolution to coarser resolutions, the most basic criterion for selecting the optimal resolution is to preserve the structural features of the original image. If the same pixel contains different targets, the spatial structure of the original image becomes blurred. Therefore, to maintain the spatial structure of the original image, the coarsest acceptable pixel should still be small enough that different targets fall in different pixels.
1. Pollution of groundwater by agricultural activities

At present, groundwater pollution caused by pesticides has drawn increasing concern, and some believe pesticide pollution will become one of the main pollution problems. The pesticides that pollute the environment include herbicides, insecticides, and fungicides. In addition, groundwater is also vulnerable to pollution sources in agricultural production, such as dunghills, farm sewage, waste, disinfectant water, and silage liquid. The number of pesticides banned by international water management agencies is increasing: for example, DDV has been banned; aldrin, dieldrin, and chlordane have been discontinued; and ioxynil (iodobenzonitrile) and bromoxynil are restricted. Pesticides and nitrate pose a serious threat to groundwater quality, and such pollution is more difficult to eliminate than surface pollution.
2. Pollution of groundwater by seawater and its treatment

Saltwater intrusion can reduce the quantity and quality of agricultural products and also harm existing freshwater animals and plants. When the salt concentration is high, it can cause physiological effects in humans and produce symptoms of hypertension.
Seawater intrusion caused by over-exploitation of groundwater is a common phenomenon in the coastal areas of China, and there are large areas of saline water around the Bohai Sea. The Chinese Academy of Sciences has carried out ground electrical sounding for seawater intrusion control in Laizhou City, Shandong Province. Figure 5-2-1 is the apparent resistivity isosection, which shows the seawater intrusion channel (low resistivity) and a freshwater ancient channel (high resistivity). Based on the measured results, resistivity maps and curve-type distribution maps were compiled, and serious intrusion areas (ρ_s = 2-17 Ω·m), mild intrusion areas (ρ_s = 17-30 Ω·m), and uninvaded areas (ρ_s = 30-100 Ω·m) were delineated. Curve types QQ and KQ correspond to serious intrusion areas, H to mild intrusion areas, and K and A to uninvaded areas. See Table 5-2-1 for the intrusion degree according to electrical sounding. In Table 5-2-1, zone III is easy to treat, zone I is difficult to treat, and brine resources have been developed in some places. Owing to serious over-exploitation of groundwater in Laizhou City, there are two large groundwater funnels at the Wanghe and Zhuqiaohe rivers, with central water levels of -15 m and -10 m respectively. In recent years the intrusion area has expanded to 435 km², more than 6000 wells have been scrapped, irrigation groundwater has been lost, and 50,000 mu of cultivated land has undergone secondary salinization. Engineering measures to control seawater intrusion have therefore been put forward.
Fig. 5-2-1 Apparent resistivity contour map
Table 5-2-1 Relationship between seawater intrusion degree and resistivity
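The resistivity bands quoted above can be encoded as a simple classifier. This is a sketch: the band edges are as quoted in the text, and the handling of exact boundary values is an assumption.

```python
def intrusion_degree(rho_s):
    """Classify seawater-intrusion degree from apparent resistivity rho_s
    (ohm*m), using the bands quoted in the text: 2-17 serious intrusion,
    17-30 mild intrusion, 30-100 no intrusion."""
    if rho_s < 17:
        return "serious intrusion"
    if rho_s < 30:
        return "mild intrusion"
    return "no intrusion"
```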
3. Investigation for soil salinization and drought control

Soil salinity can be classified according to electrical conductivity and can be determined by airborne or surface electrical methods, while the depth of the water table is generally determined by the seismic refraction method. Table 5-2-2 shows the soil salinity classification used in Australia and its relationship with crop salt tolerance and root-zone soil conductivity.
Table 5-2-2 Soil salinity classification in Australia and its relationship with crop salt tolerance and root-zone soil conductivity

To control drought, soil and water conservation and recharge measures were proposed, which requires knowledge of soil thickness, weathering crust thickness, and bedrock undulation. For this purpose, resistivity sounding was carried out systematically in a typical low-rainfall area of the state. From the electrical sounding results, contour maps of soil thickness and bedrock depth were drawn, and geoelectric sections were made to reveal changes in weathered-layer thickness and the presence of water-filled fractures. Soil and water conservation measures should be taken in areas with thin soil, while areas with a thick weathered layer and deep basement should be used as artificial recharge sites.
For soil pollution, the first task is to control and eliminate the pollution sources. The soil has a certain self-purification capacity; nevertheless, the migration and transformation of pollutants should be controlled to prevent them from entering the food chain.

(1) Control and eliminate soil pollution sources
1) Control and eliminate industrial "three wastes" emissions. Closed-circuit circulation and non-toxic processes should be popularized to reduce or eliminate pollutant discharge, and industrial "three wastes" should be recycled and purified so that the quantity and concentration of emissions meet the standards.
2) Strengthen the monitoring and management of sewage-irrigated areas. Understand the composition, content, and dynamics of pollutants, control the amount of sewage irrigation, and avoid abusing sewage irrigation and causing soil pollution.
3) Control the use of chemical pesticides. The use of highly toxic, high-residue pesticides should be prohibited or restricted, and highly effective, low-toxicity, low-residue pesticides, as well as biological pesticides, should be developed. Pesticides should be applied reasonably, safe intervals established, and pesticide-residue standards formulated.
4) Rational application of chemical fertilizer. To increase production it is necessary to apply fertilizer reasonably; excessive application will reduce crop yield and quality, cause excessive nitrate content in crops, affect human and livestock health, and increase heavy-metal content, resulting in soil pollution.

(2) Increase soil capacity and improve soil purification capacity
Increasing soil organic matter content and mixing in sand to improve sandy soil can increase and improve the type and quantity of soil colloids, enhancing the soil's capacity to adsorb toxic substances and thus reducing the activity of pollutants in the soil. Discovering, isolating and cultivating new microbial species to enhance biodegradation is also an important way of improving the soil's purification capacity
(3) Application of geophysical prospecting methods in soil pollution source investigation. Measurements of the magnetic susceptibility κ and remanence M of materials, sediments and soils can trace the pollution sources of lakes, oceans and soils, since soil M is positively correlated with the iron content of pollutants. The iron content in lake, ocean and soil sediments can therefore be estimated from the M value, and the degree of pollution of soils, water bodies, lakes and shallow-sea sediments can be judged. Fig. 5-2-2 shows the magnetic survey results for shallow-sea silt polluted by the Athens iron and steel works in Greece: the magnetic susceptibility of the surface layer 0.3 km from the works is ten times that at 1-3 km away. Guo Youzhao measured the magnetic properties of topsoil near a factory in Langfang, Hebei Province, and obtained similar results. On both sides of the Beijing-Tianjin railway, increased soil magnetism caused by domestic waste pollution has also been detected
Fig. 5-2-2 Remanence measurement results of marine sediments around the Athens iron and steel works, Greece
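The positive correlation between remanence M and pollutant iron content amounts to a simple linear calibration. A minimal sketch, with invented calibration values (real ones would come from laboratory analyses of local samples):

```python
import numpy as np

# Hypothetical calibration pairs: remanence M (A/m) of soil samples versus
# laboratory-measured iron content (%). Values are invented for illustration.
M_cal = np.array([2.0, 5.0, 8.0, 12.0, 20.0])
fe_cal = np.array([1.1, 2.4, 3.9, 5.8, 9.7])

# Fit the positive linear correlation between M and iron content
slope, intercept = np.polyfit(M_cal, fe_cal, 1)

def estimate_iron(M):
    """Estimate iron content (%) from the remanence M of a sample."""
    return slope * M + intercept

print(round(float(estimate_iron(10.0)), 2))
```

Once calibrated, field measurements of M alone can screen large areas for iron-bearing pollution far faster than laboratory assays.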
II. Application in environmental pollution assessment and monitoring
1. Pollution assessment and monitoring of groundwater. Because polluted and unpolluted water differ markedly in resistivity, contaminated groundwater that is not deeply buried and occupies a certain volume can be detected by electrical methods. For example, to assess the pollution of groundwater by coal ash, 33 observation wells were drilled at the Socqueville coal ash dump in Wisconsin, USA; electrical logging was conducted beside the wells, with the survey line perpendicular to the direction of groundwater flow. Electrical sounding and water sampling were carried out simultaneously, once a month. The cross-section of the water pollution range drawn from the electrical soundings is far more detailed than one obtained from only a few monitoring wells
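The electrical detection described above rests on apparent-resistivity measurements. A minimal sketch of the standard Wenner-array formula ρa = 2πa·ΔV/I, with illustrative (assumed, not from the survey described here) readings showing how contaminated groundwater depresses the apparent resistivity:

```python
import math

def wenner_apparent_resistivity(a_m, delta_v, current_a):
    """Apparent resistivity (ohm*m) for a Wenner array with electrode
    spacing a_m (m), potential difference delta_v (V) and injected
    current current_a (A): rho_a = 2*pi*a*dV/I."""
    return 2.0 * math.pi * a_m * delta_v / current_a

# Illustrative numbers: leachate-contaminated groundwater conducts better,
# so the measured voltage (and hence rho_a) drops for the same current.
clean = wenner_apparent_resistivity(a_m=10.0, delta_v=0.80, current_a=0.5)     # ~100.5 ohm*m
polluted = wenner_apparent_resistivity(a_m=10.0, delta_v=0.20, current_a=0.5)  # ~25.1 ohm*m
print(clean > polluted)
```

Mapping such low-resistivity anomalies along profiles is what allows the pollution plume to be outlined in more detail than a handful of wells can give.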
The University of Waterloo in Canada studied pollution by tetrachloroethylene (C2Cl4), a solvent used for dry cleaning clothes and cleaning metal; every litre discharged can pollute 1000×10^4 litres of water. Steel sheeting was driven into the ground around the experimental site to cut off the hydraulic connection between the inside and outside, tetrachloroethylene was injected into the site through a shallow hole, and neutron, density and induction logging were carried out in the surrounding monitoring holes; surface and borehole resistivity were measured regularly, and ground-penetrating radar profiles were run. The results show that because the chlorine in tetrachloroethylene strongly captures neutrons, a negative peak appears on the neutron log; high concentrations of tetrachloroethylene give a clear reflection on the radar profile, and the movement of the solvent over time can also be followed from the resistivity anomalies
Oil pollution is the most common organic pollution. At an underground oil-leakage site in Nan'ao, the EM31 electromagnetic instrument and ground-penetrating radar were first used to locate the leak, without success. The electromagnetic-wave profiling method was then used to delineate the polluted area, and the result was confirmed by drilling and trenching. During the survey, the vertical transmitting coil and horizontal receiving coil were kept in a zero-coupling state and moved along the line at a fixed coil spacing; the vertical component of the magnetic field showed a clear low anomaly over the polluted zone
Research by Russian scientists shows that the potential gradient is an indicator of the degree of air pollution. Changes in atmospheric composition near the surface caused by industry and transport, such as changes in chemical components and increases in dust and in solid and liquid smoke, strongly affect the decomposition and migration of electric charge; atmospheric conductivity decreases and the average potential gradient increases, so the vertical component of the electric field strength rises
A high content of sulfur dioxide in the atmosphere leads to acid rain, and sulfur dioxide comes mainly from coal combustion. The sulfur content of coal can be analysed by X-ray fluorescence: using 55Fe as the X-ray source and a gas proportional scintillation tube as the detector, the energy resolution of the sulfur line is 11.8% and the detection limit reaches 0.15%
According to statistics from the United States, 82% of the radiation dose people receive comes from natural sources, of which 55% comes from radon. Radon, a radioactive gas, attaches easily to dust particles and can cause lung cancer when inhaled. The radon hazard is widely distributed, and a nationwide house-by-house survey would be time-consuming and hard to accomplish. Radon is the decay product of radium, and radium in turn is formed by the decay of uranium, so it is important first to delineate areas of high uranium content (2.5-10 times the average), such as granite, gneiss, rhyolite, dacite and carbonaceous shale. Assessment and control of environmental pollution from natural nuclear radiation can therefore be divided into three stages: wide-area, small-area and indoor
(1) Wide-area radiation environment monitoring is organized uniformly by the state to understand the overall situation and degree of harm of the radiation environment nationwide. It mainly uses airborne γ-ray spectrometry, radio-geochemical surveys, and regional geological and remote-sensing data. The plan-view contour maps and profiles of uranium, thorium and potassium content from airborne radioactivity surveys are the main basis for mapping variations in ground radium content. Where airborne radiometric data are lacking, ground radiometric data and geochemical data on uranium, radium and radon can be used instead. Regional geological and remote-sensing data provide the geological and structural background, such as rock distribution and fault zones
(2) For example, a national indoor radon sampling survey can be carried out for regional environmental radon assessment, using a unified measurement method, unified calibration of detectors, and unified laboratory analysis and data processing, to study the degree of radon harm to the human environment. (3) In the high-radiation areas determined by the above methods, gamma radiometers, activated-carbon detectors, alpha-track detectors and high-sensitivity radon detectors are used to monitor human living space (especially indoor radon) and water sources to determine the sources of radiation pollution. According to statistics, about 8%-25% of lung cancer deaths are related to radon radiation in the air
2. Establish a mathematical model to find the main factors or indicators affecting the consumption differences between urban and rural residents in Chengdu
3. Use a mathematical model to analyse whether the consumption difference between urban and rural residents in Chengdu in recent years is expanding, narrowing or unchanged
4. Consumption structure is the proportion of different types of consumption goods (including services) that people consume under given socio-economic conditions. From the perspective of consumption structure, establish a mathematical model of the change in consumption structure of urban and rural residents in Chengdu, and use it to forecast and simulate that change over the next three years
5. Based on the established mathematical models and results, put forward suggestions for narrowing the consumption gap between urban and rural residents in Chengdu
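One simple way to attack the trend question above is to regress the urban-to-rural consumption ratio on time: a positive slope means the gap is widening, a negative one that it is narrowing. A minimal sketch with invented figures (real ones would come from the city's statistical yearbooks):

```python
import numpy as np

# Hypothetical yearly per-capita consumption (yuan) for urban and rural
# residents; values are illustrative, not actual statistics.
years = np.array([2015, 2016, 2017, 2018, 2019])
urban = np.array([24000, 25500, 27200, 29000, 30900])
rural = np.array([12000, 13000, 14200, 15500, 17000])

ratio = urban / rural                   # urban-to-rural consumption ratio
slope, _ = np.polyfit(years, ratio, 1)  # linear trend of the ratio over time

trend = "expanding" if slope > 0 else "narrowing or stable"
print(trend)
```

The same fitted line, extrapolated to later years, gives a crude forecast; richer models (e.g. per-category fits for consumption structure) follow the same pattern.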
profit = enterprise performance - enterprise expenditure
enterprise performance = product sales × product price
enterprise performance depends on product sales volume and price, so we need to find a suitable point of entry, which brings us to product pricing strategy
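The two formulas above translate directly into code. A minimal sketch with assumed numbers, comparing a market-following price against a premium price that trades some volume for a higher margin:

```python
def enterprise_performance(units_sold, unit_price):
    """Enterprise performance = product sales volume x product price."""
    return units_sold * unit_price

def profit(units_sold, unit_price, expenditure):
    """Profit = enterprise performance - enterprise expenditure."""
    return enterprise_performance(units_sold, unit_price) - expenditure

# Illustrative comparison (all numbers assumed): follow pricing at the
# market price versus a ~30% premium with somewhat lower sales volume.
follow = profit(units_sold=1000, unit_price=100.0, expenditure=60000.0)  # 40000.0
premium = profit(units_sold=800, unit_price=130.0, expenditure=60000.0)  # 44000.0
print(follow, premium)
```

With these assumed figures the premium strategy wins despite selling fewer units; the break-even point depends entirely on how much volume the higher price costs you.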
how do you set the most competitive price for products or services
there are two situations:
first, your products are already produced or provided by other enterprises in the market
1. Follow Pricing:
since your product is priced in line with theirs, you should pay more attention to service and details
2. Super high pricing:
for the same product, some manufacturers can set a very high price, because they never treat price as their core competitiveness. If your core competitiveness is unique and cannot be imitated by outsiders, you can adopt the super-high pricing strategy, pricing more than 30% above the market price to maximize profit
Second, your product is new to the market, or few domestic enterprises produce it or provide the service.