disciplinas:verao2007:exercicios [2007/02/17 22:48] paulojus
  - Inspect [[http://leg.ufpr.br/geoR/tutorials/Rcruciani.R|an example geostatistical analysis]] for the hydraulic conductivity data.
  - (4) Consider the following two models for a set of responses, <m>Y_i : i=1, ... ,n</m>, associated with a sequence of positions <m>x_i: i=1,...,n</m> along a one-dimensional spatial axis <m>x</m>.
    - <m>Y_{i} = alpha + beta x_{i} + Z_{i}</m>, where <m>alpha</m> and <m>beta</m> are parameters and the <m>Z_{i}</m> are mutually independent with mean zero and variance <m>sigma^2_{Z}</m>.
    - <m>Y_i = A + B x_i + Z_i</m>, where the $Z_i$ are as in (a) but //A// and //B// are now random variables, independent of each other and of the $Z_i$, each with mean zero and respective variances $\sigma_A^2$ and $\sigma_B^2$.\\ For each of these models, find the mean and variance of $Y_i$ and the covariance between $Y_i$ and $Y_j$ for any $j \neq i$. Given a single realisation of either model, would it be possible to distinguish between them?
  - (5) Suppose that $Y=(Y_1,\ldots,Y_n)$ follows a multivariate Gaussian distribution with ${\rm E}[Y_i]=\mu$ and ${\rm Var}\{Y_i\}=\sigma^2$, and that the covariance matrix of $Y$ can be expressed as $V=\sigma^2 R(\phi)$. Write down the log-likelihood function for $\theta=(\mu,\sigma^2,\phi)$ based on a single realisation of $Y$, and obtain explicit expressions for the maximum likelihood estimators of $\mu$ and $\sigma^2$ when $\phi$ is known. Discuss how you would use these expressions to find maximum likelihood estimators numerically when $\phi$ is unknown.
  - (6) Is the following a legitimate correlation function for a one-dimensional spatial process $S(x) : x \in \IR$? Give either a proof or a counter-example.\\
<m> rho(u) = delim{lbrace}{matrix{2}{1}{{1-u : 0 <= u <= 1}{0 : u>1}}}{} </m>\\
  - (7) Consider the following method of simulating a realisation of a one-dimensional spatial process $S(x) : x \in \IR$, with mean zero, variance 1 and correlation function $\rho(u)$. Choose a set of points $x_i \in \IR : i=1,\ldots,n$. Let $R$ denote the correlation matrix of $S=\{S(x_1),\ldots,S(x_n)\}$. Obtain the singular value decomposition of $R$ as $R = D \Lambda D^\prime$, where $\Lambda$ is a diagonal matrix whose non-zero entries are the eigenvalues of $R$, in order from largest to smallest. Let $Y=\{Y_1,\ldots,Y_n\}$ be an independent random sample from the standard Gaussian distribution, ${\rm N}(0,1)$. Then the simulated realisation is <latex>$S = D \Lambda^{\frac{1}{2}} Y$</latex>
  - (7) Write an ''R'' function to simulate realisations using the above method for any specified set of points $x_i$ and a range of correlation functions of your choice. Use your function to simulate a realisation of $S$ on (a discrete approximation to) the unit interval $(0,1)$.
  - (7) Now investigate how the appearance of your realisation $S$ changes if, in the equation above, you replace the diagonal matrix $\Lambda$ by a truncated form in which the last $k$ eigenvalues are replaced by zeros.
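The simulation method described in exercise (7) can be sketched in ''R'' as follows. This is only a minimal illustration of the algorithm, not the required solution: the exponential correlation function $\rho(u) = \exp(-u/\phi)$ and the parameter values are arbitrary choices for the sketch, and ''eigen()'' is used because, for a symmetric positive semi-definite correlation matrix, the singular value decomposition coincides with the eigendecomposition.

```r
## Sketch of the eigendecomposition-based simulation of exercise (7).
## Assumes an exponential correlation rho(u) = exp(-u/phi) (illustrative choice).
sim_gp <- function(x, phi = 0.2, k = 0) {
  R <- exp(-as.matrix(dist(x)) / phi)    # correlation matrix of S(x_1),...,S(x_n)
  e <- eigen(R, symmetric = TRUE)        # R = D Lambda D', eigenvalues in decreasing order
  lam <- e$values
  if (k > 0)                             # optional truncation: zero out the last k eigenvalues
    lam[(length(lam) - k + 1):length(lam)] <- 0
  lam[lam < 0] <- 0                      # guard against tiny negative values from rounding
  y <- rnorm(length(x))                  # independent N(0,1) sample Y
  drop(e$vectors %*% (sqrt(lam) * y))    # S = D Lambda^(1/2) Y
}

x <- seq(0, 1, length.out = 101)         # discrete approximation to (0,1)
s <- sim_gp(x)
```

A call such as ''plot(x, s, type="l")'' shows one realisation; re-running ''sim_gp(x, k = 50)'' with increasing $k$ lets you inspect the effect of the truncation asked about in the last item.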
