e-book EVA als Instrument zur Steuerung von Unternehmen (German Edition)

This project aims at novel termination proofs for programs in language fragments such as Parigot's lambda-mu-calculus and tries to incorporate complex datatypes. An alternative technique to CPS translations has been discovered. References: [43, 5]. In the project Media of the Future we investigate the use of new types of media and devices for presentation and interaction with information. In the course of the project we investigate commercially available products as well as technologies available in the research community and assess their applicability for specific application domains.

The focus is on input and output devices for virtual and augmented reality, technologies for mobile multimedia applications, and physical user interfaces. Beyond that we investigate technologies and methods that can be used for implicit user interfaces; of particular interest are sensors for capture and ambient media for peripheral information provision. The evaluation of technologies led to a laboratory setup where specific technologies can be tested and assessed in detail with regard to specific application domains and users' needs.

Examples of technologies that are available in the lab are an interactive whiteboard (Smart-Board), a 3D head-mounted display unit, data gloves for input, and prototypes of physical user interfaces (Smart-Its). This is complemented by software and development tools for these systems. Multimedia applications are widely used for presentations, learning systems, and teaching tools. Currently, development support and authoring tools are tailored to very specific application domains.

Support for the development of complex interactive multimedia applications is inadequate. Model-driven development, reusable components, and interface abstractions - standard practice in software engineering - are poorly supported. In this project we investigate how methods and tools successfully used in software engineering can be applied to improve the development process of complex multimedia applications. We are interested in how such applications are developed; in particular, the cooperation of designers and software engineers is investigated.

Existing development systems for conventional software and authoring tools for multimedia applications have been studied and compared. These results are the basis for the development of new tools and methods for modelling and creating multimedia applications. In more and more domains the production of audiovisual media is based entirely on digital technology.

Images, sounds, and video are captured, recorded, communicated, and stored in digital form. Additional processing steps are also carried out digitally. This approach changes work processes and allows new forms of media production. In the project Digital Media Production a concept for a research laboratory was created, and the installation is currently under way. The lab allows us to investigate all steps that are involved in the process of digital media production.

We investigate processes where video, animation, and 3D graphics merge. A further topic is digital audio production and distribution channels. We are especially interested in new models and data formats for digital radio and in particular Internet radio. Based on these technologies we investigate new approaches and technologies for digital media production. The entire project is third-party funded by the Federal Ministry of Education and Research (BMBF). The aims of the project are the development of a description language for business processes and of supporting tools and infrastructure.

The description language will be based on UML (Unified Modeling Language) and is designed to be appropriate for modelling complex business processes on an abstract level. The language will be the basis for analysing business processes with regard to economic aspects. In the course of the project a prototypical reference implementation of an application that processes the description language will be designed and implemented.

The main function of this application is to transform descriptions of business processes into executable programs. These generated programs will be web applications using HTML as the mechanism for creating the user interface.
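To make the generator idea concrete, here is a minimal, purely illustrative sketch in Python: the process-step dictionary, the field names and the render_form helper are hypothetical stand-ins and not part of the project's UML-based description language.

    # Hypothetical, simplified process-step description; the project's real
    # description language is UML-based and far richer than this dictionary.
    step = {
        "name": "approve_order",
        "fields": [
            {"label": "Order ID", "name": "order_id", "type": "text"},
            {"label": "Approved", "name": "approved", "type": "checkbox"},
        ],
    }

    def render_form(step):
        """Render one process step as an HTML form, the kind of artefact a generator could emit."""
        rows = "".join(
            '<label>{}: <input type="{}" name="{}"></label>'.format(f["label"], f["type"], f["name"])
            for f in step["fields"]
        )
        return '<form action="/{}" method="post">{}<button>Submit</button></form>'.format(step["name"], rows)

    print(render_form(step))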

Heun started in April. The activities concentrate on the establishment of the Bioinformatics Initiative Munich (BIM) and on algorithmic aspects of bioinformatics. In computer science the unit focuses on areas relevant for bioinformatics applications. The problems involved are hard from an algorithmic viewpoint (NP-complete) and difficult with respect to modelling aspects (e.g. protein folding).

The project PROSEQO develops and implements heuristic combinatorial optimisation methods for protein structure prediction, determines good and efficiently computable bounds for structure prediction, exploits appropriate biological constraints, and defines a formal language for the specification of such constraints.

The project constructs feature-based views of the protein sequence-structure space, works on multi-criteria clustering of sequences and structures at the same time, and develops methods for classification and alignment of proteins based on various criteria and scoring functions. The effective and discriminative construction and evaluation of those criteria and scoring systems are also done in the project.

The project PSY (protein structure analysis) aims at integrating advanced students and student staff into a collaborative effort to implement a competitive protein structure prediction server. This involves the development of new methods and derived data resources, the exploitation of available programs, databases and web services, the intelligent combination of these methods, and the effective use of various supercomputer and workstation cluster facilities. Within several projects funded by Aventis (BEX, two years, two full-time scientists) and by the BMBF (BOA, three years, two scientists), new innovative methods for the analysis of expression data and of metabolic and regulatory networks are being developed.

In particular, graph and Petri net models and algorithms combine network analysis and statistical evaluation for the combined interpretation of expression data. The goal is the construction of disease models and the identification of new drug target molecules as starting points for new drug candidates, therapies and diagnostic means.

Methods for mining metabolic and regulatory relationships from scientific texts and data extraction from appropriate databases are developed and used to generate large networks building the basis of new hypotheses concerning diseases and molecular mechanisms. The methods are applied to experimental data and measurements from several disease groups of the pharmaceutical company Aventis (Frankfurt).

In addition, methods to design and equip new tailor-made DNA chips are being developed. The project BEX concentrates in this area on the development and implementation of new algorithms and tools. The project BOA focuses on the application and tuning of the methods as well as the development of new concepts for expression data analysis, in particular for osteoarthritis research. BOA provides bioinformatics analysis tools for and within the BMBF molecular medicine Leitprojekt "Therapie und Diagnose der Osteoarthrose" in collaboration with about 20 partners from pharmaceutical industry, biotech companies, start-ups, research institutes and university hospitals for the analysis of expression data and networks of osteoarthritis.

ProMiner allows for a sensitive and accurate search for gene and protein names in large scientific text bodies, such as the roughly 15 million PubMed abstracts, and for the construction of simple co-occurrence networks from the searches. ToPNet allows for visualisation and analysis of networks of various types. It contains a broad range of interactive selection, visualisation, searching and analysis tools for such networks and associated annotations.

These annotations can be all kinds of functional data, mappings to classifications and ontologies, or measurement data such as DNA-chip expression data or proteomics measurements. ToPNet uses a set of mapping files that facilitates the connections between the various network nodes, genes and proteins, and the annotations and measurements. These methods include pathway scoring, co-clustering of network and expression data, significant area search, pathway queries, and clustering of expression data.

The protein chips developed in the project are based on an innovative detection method for bound protein molecules via mass spectrometry and new binding mechanisms for these proteins via chip-attached RNA molecules (so-called aptamers). The new chip will be applied to data from the application areas of blood coagulation (U. Bonn) and viral infections (U. Cologne) for the identification of new targets and the validation of target candidates.

The goal of the LMU project is the development of methods for the joint and combined analysis of networks, mRNA expression and protein expression data. Aptamers are short nucleic acid sequences that have been selected for their specific binding to a given protein target. The aim of PROBIO is to provide the bioinformatics support for finding suitable candidate targets and to perform the analysis of the results.

A number of different research topics are being addressed within the scope of this project. Cheminformatics deals with theoretical models and computer science methods to address problems and data obtained in chemistry applications. A second interest is in developing machine learning methods for classification, regression and feature extraction [10, 51]. A third problem is the understanding and simulation of the dynamics of molecular and cellular processes.

Work done on this topic involves the dynamics of Vancomycin, the analysis of co-operativity effects in binding, and protonation states. A major problem concerns protein folding: research is on the development and application of methods for calculating folding trajectories and conformational transitions and for predicting protein structures of peptides and small proteins. Artificial receptors are relatively low molecular weight molecules that nevertheless can bind a specific ligand with high affinity and specificity. Future efforts will be focused on a simple theoretical framework for the analysis of gene expression and cellular gene expression dynamics.

Chem- and bioinformatics methods and models will be combined to obtain novel insights into biological processes and find possible applications in the field of drug design and biotechnology. The prediction of the spatial structure of a protein is one of the most important open problems in molecular biology. For the standard combinatorial model (the so-called HP model proposed by Kenneth Dill), only a few simple approximation algorithms are known, which achieve poor approximation ratios and produce rather artificial conformations.
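As a point of reference, the following minimal sketch evaluates the energy of a conformation in the standard HP model on the 2D square lattice; the relaxed lattices studied in the project are not covered here, and the example sequence and walk are made up for illustration.

    # Standard HP model on the 2D square lattice: the energy is minus the number of
    # hydrophobic (H) contacts between residues that are lattice neighbours but not
    # neighbours in the chain.
    def hp_energy(sequence, coords):
        contacts = 0
        for i in range(len(sequence)):
            for j in range(i + 2, len(sequence)):          # skip chain neighbours
                if sequence[i] == "H" and sequence[j] == "H":
                    dx = abs(coords[i][0] - coords[j][0])
                    dy = abs(coords[i][1] - coords[j][1])
                    if dx + dy == 1:                        # unit lattice distance
                        contacts += 1
        return -contacts

    # A toy conformation: a self-avoiding walk for the sequence HPPH.
    sequence = "HPPH"
    coords = [(0, 0), (1, 0), (1, 1), (0, 1)]               # a 2x2 square
    print(hp_energy(sequence, coords))                      # -1: one H-H contact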

To overcome the artificial predictions, different and more appropriate relaxed discretisations of the 3-dimensional space as well as relaxations of the embedding constraints have to be studied. Although it is difficult to compare the approximation ratios of protein structure prediction algorithms on different lattice models, it should be mentioned that this is the best known approximation ratio for such algorithms.

Finally, the running times of both approximation algorithms are linear. Related publications: [38, 39, 44, 45, 47, 48, 49, 50].

One step in the analysis of gene expression data is the clustering of co-expressed genes. Due to the large amount of generated data and the fact that the data are perturbed by noise, rigorous mathematical models for the analysis as well as efficient algorithms are necessary to overcome the involved problems.

We presented a sound mathematical model dealing with the large error rates and proposed two algorithms (one based on probabilistic methods, the other on spectral graph theory) to solve the involved clustering problem efficiently. These algorithms recognise perturbed cluster structures assuming that the noise is identically and independently distributed.
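The sketch below is not the department's algorithm; it only illustrates, on synthetic data, the spectral idea of recovering a planted two-cluster structure from a noisy similarity (adjacency) matrix.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                               # 2 planted clusters of 50 genes each
    truth = np.array([0] * 50 + [1] * 50)

    # Noisy similarity matrix: higher connection probability within clusters than across.
    p_in, p_out = 0.7, 0.3
    prob = np.where(truth[:, None] == truth[None, :], p_in, p_out)
    A = (rng.uniform(size=(n, n)) < prob).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                           # symmetrise, no self-loops

    # Spectral step: the eigenvector for the second-largest eigenvalue separates the clusters.
    vals, vecs = np.linalg.eigh(A)
    labels = (vecs[:, -2] > 0).astype(int)

    agreement = max(np.mean(labels == truth), np.mean(labels != truth))
    print(agreement)                                      # close to 1.0 for this noise level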

The algorithms are completely analysed, and experimental results on synthetic data as well as on real data demonstrate their capability to solve the clustering problem efficiently.

An important algorithmic problem is to develop efficient genome comparison methods that find the minimum number of global mutation operations needed to transform one genome sequence into another and thereby characterise the evolutionary relationship between the corresponding organisms.

This approach guarantees with a high degree of certainty that existing evolutionary relationships will be discovered. This combinatorial problem is of particular interest to theoretical computer science because it has been shown to be NP-complete and therefore, in full generality, not solvable efficiently due to its high inherent complexity. On the one hand this motivates the search for efficient approximation algorithms guaranteeing high-quality results and short running times in real-life experimental settings.

On the other hand, it justifies attempts to achieve a reduction of complexity through modifications to the underlying formal model, making the biological problem accessible to computer science. We study the complexity of different versions of the genome rearrangement problem and evaluate existing formal models in terms of their computational feasibility - most of the questions arising in this context are currently open.
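A standard quantity in this area is the number of breakpoints of a permutation that encodes one gene order relative to another; it gives a simple lower bound on the reversal distance. The toy sketch below computes it for a hypothetical unsigned gene order.

    # Breakpoints of an unsigned permutation relative to the identity order 1..n.
    # Every reversal removes at most two breakpoints, so the reversal distance is
    # at least breakpoints/2 (a classical lower bound, shown here for illustration).
    def breakpoints(perm):
        extended = [0] + list(perm) + [len(perm) + 1]      # frame with 0 and n+1
        return sum(
            1
            for a, b in zip(extended, extended[1:])
            if abs(a - b) != 1                             # non-adjacent genes
        )

    gene_order = [3, 1, 2, 5, 4]                           # hypothetical observed gene order
    b = breakpoints(gene_order)
    print(b, "breakpoints; reversal distance >=", (b + 1) // 2)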

In the same context we aim at designing new and realistic models which should still allow algorithmic solutions. The main focus, however, is on developing efficient combinatorial methods yielding either exact or approximate results with a high guaranteed approximation quality.

A further topic is the visualisation of biological networks. Such networks will usually be modelled with graph-theoretic concepts like Petri nets or attributed hypergraphs. Due to the large amount of data, it is not useful to generate a single map representing it.

Hence, algorithmic methods are needed for organising data in different layers representing different levels of abstraction. Therefore, on the one hand, methods are sought for structuring and partitioning graphs and their underlying data, and on the other hand, methods to reveal similarities within the same layer of abstraction as well as between different levels. Provided suitable computational models of biochemical networks, our goal is to develop efficient algorithms to detect similarities for structuring given graphs as well as to find close relationships between parts of the given graph.

Although this problem (subgraph isomorphism) is in general NP-hard, the additional biological information stored as attributes at nodes and hyperedges might be helpful for designing efficient algorithms. On the other hand, specific models for graph similarity are required, like edit distances, to quantify the similarity of graphs.

Such models are useful if the induced similarity measures can be determined efficiently. Another aspect is the investigation of fluxes or pathways within biochemical networks. Here we are interested in the decomposition of a given network or fluxes into fundamental ones under the constraints of involved products or enzymes. Such algorithms are required as a first step of analysing the function of a given biochemical network and to find possible alternative pathways of a given flux or to inhibit certain pathways.
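The attributed subgraph matching mentioned above can be prototyped with an off-the-shelf matcher; the following sketch uses networkx with made-up node labels and a tiny made-up motif, and is only meant to illustrate the problem setting, not the project's algorithms.

    import networkx as nx
    from networkx.algorithms import isomorphism

    # A small attributed network with hypothetical node kinds.
    G = nx.Graph()
    G.add_nodes_from([
        ("g1", {"kind": "gene"}), ("e1", {"kind": "enzyme"}),
        ("m1", {"kind": "metabolite"}), ("m2", {"kind": "metabolite"}),
    ])
    G.add_edges_from([("g1", "e1"), ("e1", "m1"), ("e1", "m2")])

    # Motif to search for: an enzyme connected to a metabolite.
    motif = nx.Graph()
    motif.add_nodes_from([("E", {"kind": "enzyme"}), ("M", {"kind": "metabolite"})])
    motif.add_edge("E", "M")

    matcher = isomorphism.GraphMatcher(
        G, motif, node_match=isomorphism.categorical_node_match("kind", None)
    )
    print(matcher.subgraph_is_isomorphic())                # True
    print(list(matcher.subgraph_isomorphisms_iter()))      # e.g. {'e1': 'E', 'm1': 'M'}, ...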

Overall, BIM funds four junior research groups consisting of one associate professor and two scientists each. The groups are associated with the respective faculties for computer science and biology at the two universities. Here graduates will study via an individual study plan to receive an additional bachelor's degree in bioinformatics within three semester terms.

At the research unit for practical computer science and bioinformatics, a junior research group (the associate professorship of Heun, an endowed Stiftungsprofessur, plus two scientists) is funded via BIM for a five-year period. Related publications: [40, 41, 42, 43]. Currently, BioSolveIT has created 15 jobs, mostly for bioinformatics researchers at the postdoc level.

Poster presentations: [6, 27, 36, 56, 68]. Dissertation theses: [31, 38, 65, 84, 87, 92]. University lecture notes: [41, 43]. Various workshops ranging from Semi- and nonparametric Modelling to Statistics in Genetics have been organized. A large number of researchers have visited the department, various seminars have been given, and research projects have been initiated. Methodological research at the department is usually motivated and stimulated through demand and challenge in diverse fields of applications and empirical research. Vice versa, applied research in life, economic and social sciences is often based on our own development of adequate statistical methods or on related work of colleagues in our scientific community.

This interplay between methods and applications is reflected in the structure of research activities, and the following topics are of prime applied or methodological interest:

  • Statistical Modelling
  • Computational Statistics
  • Econometrics
  • Biostatistics
  • Statistics in business, economics and social sciences
  • Methodological Foundations of Statistics

These research clusters enhance the formation of groups of scientists cooperating within the department. Additionally, this structure stimulates joint scientific work and research seminars, and it aims at transcending traditional organizational borders.

The following sections provide an overview of research activities in these clusters. The clusters are partially overlapping, so some research results appear in more than one topic. Classical statistical models are generally useful in situations where data are approximately Gaussian and can be explained by some linear structure. Although such models are easy to interpret and theoretically well understood, their underlying assumptions are often too restrictive in situations where data are clearly non-Gaussian or have nonlinear structure.

Driven by the demands in biological, economic and social sciences, and grown around generalized linear models, statistical modelling emerges as a broad and flexible extension of model-based statistical inference in more complex data situations, in particular with discrete and correlated data. Inference is mostly likelihood-based, including modern Bayesian approaches. We roughly distinguish three major, overlapping subclusters: Semi- and Nonparametric Regression (likelihood-based and Bayesian semiparametric regression), Deficient Data (missing data and measurement error models), and Time-dependent and Spatial Data.

Semiparametrically structured regression models are defined as a class of models for which the predictors may contain parametric parts, additive parts with an unspecified functional form of covariates as well as interactions between variables which are described as varying coefficients. The approaches are extremely flexible in capturing the way in which the predictor influences the dependent variable.

Research focuses on approaches which are embedded into the framework of semiparametric generalized models, allowing for response variables which are given as count data or binary variables, or metrically scaled variables of various distributional form. Development of methods includes localizing approaches as well as penalized maximum likelihood methods. Bayesian approaches for non- and semiparametric regression models have recently gained much interest.

They offer some advantages, in particular: the choice of smoothing or tuning parameters is an integral part of the model, and extensions to more complex situations, such as longitudinal or spatial data, are conceptually easy. We distinguish between smoothness prior approaches as a stochastic generalization of penalized likelihood methods and adaptive basis function approaches.

We have worked in both directions, but our current focus is on smoothness prior approaches. As an alternative, we developed empirical Bayes inference based on mixed model technology. To make the methods accessible for nonspecialists and to facilitate cooperation with applied researchers, we developed public-domain software, in particular BayesX, see Computational Statistics.
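For readers unfamiliar with penalized likelihood smoothing, the following self-contained sketch fits a penalized least-squares spline to simulated data; it uses a simple truncated-power basis and a ridge-type penalty as a stand-in for the B-spline bases, difference penalties and Bayesian smoothness priors actually used in this line of work.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 1, 120))
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)       # noisy nonlinear signal

    # Truncated-power spline basis: intercept, linear term, and one piece per interior knot.
    knots = np.linspace(0.05, 0.95, 20)
    X = np.column_stack([np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots])

    # Penalized least squares: the penalty acts on the knot coefficients only;
    # lam plays the role of the smoothing parameter (chosen by hand here).
    lam = 1.0
    P = np.diag([0.0, 0.0] + [1.0] * len(knots))
    beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
    fitted = X @ beta
    print(np.round(fitted[:5], 2))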

In practical statistics investigators are often confronted with the problem of incomplete data sets. Therefore, statistics as a field of research has to develop empirical-analytical tools to deal with this problem. At the centre of research we consider estimation and prediction in models of regression type under the complication of incomplete data. This concerns missing data problems for longitudinal and cluster data as well as linear regression models with incomplete discrete and continuous covariates.

Further research topics are semiparametric models with missing data, selection models with flexible modelling of the drop-out rate, and generalized linear models with random effects and missing (MNAR) responses. In non- and semiparametric regression models, methods known from linear regression and nearest-neighbour imputation were investigated. Marginal regression models, conditional models and random effects models are possible adaptations of generalized linear models to dependent responses and were also handled. Research cooperations arise from projects that currently face missing-data problems.

A widespread problem in applying regression analysis is the presence of measurement error. Often the variables of interest cannot be observed directly or measured correctly, and one has to be satisfied with surrogates (often also named indicators or proxies). So the development of adjusted estimators is indispensable to avoid deceptive conclusions. These methods have received increasing attention especially in epidemiology and econometrics.
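As a minimal illustration of why adjustment is needed, the simulation below shows the attenuation of a regression slope under classical measurement error and a simple method-of-moments correction; the numbers and the correction are textbook material, not results from the projects described here.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    x = rng.normal(0, 1, n)                      # true covariate
    w = x + rng.normal(0, 0.8, n)                # observed surrogate with classical error
    y = 1.0 + 2.0 * x + rng.normal(0, 1, n)      # outcome generated from the true covariate

    C = np.cov(np.vstack([x, w, y]))
    naive = C[1, 2] / C[1, 1]                    # slope of y on w: biased towards zero
    reliability = C[0, 0] / C[1, 1]              # var(x)/var(w); estimated from replicates in practice
    corrected = naive / reliability              # method-of-moments correction
    print(round(naive, 2), round(corrected, 2))  # roughly 1.2 vs. 2.0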

One focus of our research is survival analysis, where we, in particular, derived an exact corrected score function for the Cox model and a general unifying approach to deal with measurement error in parametric accelerated failure time models. Further research was concerned with the comparison of different approaches with regard to efficiency, the behaviour of structural estimators under misspecification, and the search for robust (outlier-resistant) and ridge-type estimators. We also work on nonstandard measurement error problems in linear and nonlinear models, like rounding, heaping, complex error models, the superposition of Berkson and classical error, and deliberately contaminated data to guarantee anonymity.

We have applied the methods in nutritional epidemiology (influence of nutrition habits on cardiovascular disease), several radiation studies, micro-econometrics and sociology (unemployment duration data from the German Socio-Economic Panel), and dental medicine. Cooperation partners include Wichmann, I. Blettner and R. Kukush (Kiev), and E. Lesaffre (Kath. Leuven).

In life and social sciences as well as in business and industry, the availability of data that carry temporal or spatial information is nearly exploding and creates an important and challenging topic of current international research to which the Department contributes.

Faculty 16 "Mathematics, Informatics and Statistics": Research Report 1998 -- 2003

Our methodological research is mainly motivated and driven by biostatistical and economic applications, in cooperation with partners from various fields. The focus is on models and methods for longitudinal data, in particular with discrete responses, for survival and event history data, and for spatial or spatio-temporal data. The complexity of realistic models for temporal and spatial data necessitates computer-intensive data analytic methods, thus strengthening the link to Computational Statistics.

Approaches with an emphasis on economic time series and longitudinal data, in particular for data from financial markets, are contained in Econometrics. Computational statistics is a statistical science at the interplay between computer science and data analysis. The topic includes various state-of-the-art methods for statistical inference, such as resampling methods (e.g. the bootstrap) and Markov chain Monte Carlo (MCMC) simulation.

MCMC methodology provides enormous scope for realistic statistical modelling. Research at the department has focused on designing efficient MCMC algorithms for latent parameters in complex hierarchical models. In particular, methods for estimating latent Gaussian Markov random fields (GMRFs) and Bayesian P-spline models have been developed, with strong emphasis on so-called block updating algorithms.

Such algorithms have considerably improved convergence and mixing properties. Furthermore, methods based on auxiliary variables have been investigated, which allow for block updating via Gibbs sampling in binary and multicategorical regression problems, in contrast to Metropolis-Hastings steps based on multivariate Taylor expansions.
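For orientation, here is a minimal single-parameter random-walk Metropolis sampler on a toy target; it only illustrates the basic MCMC mechanics, not the block-updating or auxiliary-variable samplers developed at the department.

    import numpy as np

    rng = np.random.default_rng(42)

    def log_target(theta):
        return -0.5 * theta ** 2                 # toy target: standard normal, up to a constant

    theta, lp = 0.0, log_target(0.0)
    chain = []
    for _ in range(10000):
        prop = theta + rng.normal(0, 1.0)        # random-walk proposal
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp: # Metropolis acceptance step
            theta, lp = prop, lp_prop
        chain.append(theta)

    print(round(np.mean(chain), 2), round(np.var(chain), 2))   # close to 0 and 1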

Finally, specific implementations of Bayesian partition models via reversible jump MCMC have been developed. The algorithms are basic building blocks for fully Bayesian inference in complex semiparametric models, see 3.

Scientific statistical computing requires reliable statistical software.

We test standard statistical software packages and point out possible errors so that these errors can be eliminated in future versions of the software. Great efforts have been made to provide public-domain software for the new statistical methodology developed at the department.

The following statistical packages have been developed, among others:

  • Bamp (www.)
  • BayesX (www.): BayesX is able to estimate very complex semiparametric regression models with structured additive predictors in a Bayesian framework.
  • BDCD (www.): This command-line based program allows for the estimation of unknown relative risk parameters in a typical disease mapping setting.
  • BVCM (www.): The software estimates varying coefficient models in a Bayesian framework.
  • ELV (www.)
  • GraphFitI (www.): It is designed to fit a graphical model to a multivariate data set; it applies a data-driven selection strategy introduced by Cox and Wermuth.

  • S-Plus code for multicategorical penalized spline regression: based on the P-splines approach, the software allows for nonparametric extensions of common models for nominal and ordinal responses.

For further software projects see the section about Statistical Genetics and Bioinformatics.

Multivariate statistical analysis is concerned with data that consist of sets of measurements on a number of individuals or objects. The basis is the analysis of dependence between variables, between sets of variables and between variables and sets of variables.

The investigation of the structure of variables is used in prognosis and classification and for the detection of similarity of objects. Methods are often based on computer-intensive techniques such as boosting or bootstrapping, genetic algorithms and tree-based methods. In high-dimensional statistical analysis, problems of dimension reduction prevail. Strongly related to multivariate methods is statistical data mining, which is the process of selecting, exploring, modifying and modelling large sets of data by statistical methods to uncover previously unknown patterns.
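As a small, generic illustration of the computer-intensive resampling methods mentioned here, the sketch below computes a percentile bootstrap confidence interval for a mean on simulated data; it is textbook bootstrapping, not one of the department's specific procedures.

    import numpy as np

    rng = np.random.default_rng(7)
    data = rng.exponential(scale=2.0, size=200)            # a skewed sample

    boot_means = np.array([
        rng.choice(data, size=data.size, replace=True).mean()
        for _ in range(2000)
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])        # percentile bootstrap CI for the mean
    print(round(data.mean(), 2), (round(lo, 2), round(hi, 2)))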

Bump hunting has been explored as a new tool of statistical data mining for analyzing risks in finance and survival analysis. Graphical models are a useful tool for modelling complex high-dimensional associations. The group has developed theoretical and computational tools in order to apply graphical models in a wide area of applications. Furthermore, extensive research has been done in applying resampling methods in the area of bioequivalence and non-inferiority trials.

Econometrics combines economic theory with techniques from statistics and mathematics to model economic and financial systems. Econometric models help to better understand the economic processes we observe and, thus, to improve the design of economic systems. Specifically, econometric models play an important role in testing economic theories, in predicting future economic developments and in supporting economic policy making. Among other topics, the research includes the analysis of regression models in the presence of heavy-tailed disturbances, testing for the presence of structural breaks, and the detection of outliers.

Compared to other economic data, financial data often behave very differently. Specifically, they can be characterized by heavy tails, i.e. extreme observations occur far more often than under a normal distribution (Claessen, H.; Mittnik, S.). The research in time series analysis addresses both theoretical and applied issues. In addition to economic and financial applications, there are contributions in the area of medical psychology (Haas, M.).
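To illustrate what heavy tails mean in practice, the simulation below compares a normal sample with a rescaled Student-t sample of equal variance; the distributions and thresholds are arbitrary choices for demonstration, not those used in the cited research.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 100000
    normal = rng.normal(0, 1, n)
    heavy = rng.standard_t(df=5, size=n)
    heavy = heavy / heavy.std()                            # rescale to unit variance

    def excess_kurtosis(x):
        z = (x - x.mean()) / x.std()
        return (z ** 4).mean() - 3.0

    for name, x in [("normal", normal), ("t(5)", heavy)]:
        # heavier tails show up as larger kurtosis and more exceedances of +/-4
        print(name, round(excess_kurtosis(x), 2), np.mean(np.abs(x) > 4))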

The research in empirical macroeconomics investigates the interaction of the financial sector and the real sector of an economy. A second field of investigation concerns questions in public economics. Here, we analyze the effect that public spending, specifically public consumption versus public investment, has on long-term economic growth.

In the last two decades, industrial economics has been going through an empirical revolution. Researchers have applied econometric techniques not only for hypothesis testing but also as a toolbox for measuring important industry determinants. By means of structural models, important unobservables such as demand elasticities and marginal costs can be measured. Furthermore, newly developed demand estimation techniques allow measuring the impact of policies on welfare and costs; thus econometric analysis has become a necessary step in antitrust and merger analysis. The group employs such techniques to analyze network industries, most notably telecommunications.

The research concentrates on estimating demand in a competitive telecommunications industry, using these estimates to measure the welfare impact of liberalization, and on measuring the competitiveness of industries across Europe. Other research areas are internet auctions, network interconnection, research and development with network effects, and open source software development (Doganoglu, T.).

Biostatistics creates and applies methods for quantitative research in the health sciences. Common applications include clinical medicine, epidemiological studies, genetics, environmental health, ecology, forestry, and general biology. At the Department of Statistics at Munich University particular emphasis is placed on research related to epidemiology, genetics, and neuroscience.

The group develops methodology for spatial and spatio-temporal, longitudinal, and survival data on chronic and infectious diseases. Recent work has focussed on the spatial and temporal analysis of cancer incidence and mortality. Furthermore, problems in infectious disease epidemiology of animals and humans are being considered. Papers on these issues in journals like "Biometrics", "Biostatistics", "Statistics in Medicine" and "Applied Statistics" have led to international recognition of the group. The group actively develops a number of software packages for the analysis of genetic data that are freely available for download (e.g. Paradis et al.).

The main interest of this group is in statistical aspects of bioinformatics and computational biology. Current research focuses on the development of methods for analysing gene expression data and on probabilistic models for DNA sequence analysis. Local collaborations exist with experimental and theoretical groups. An important area of substantive research in neuroscience is the functional neuroanatomy of the human brain. Brain mapping aims at detecting areas of functional activity, for example the visual cortex, based on functional magnetic resonance imaging (fMRI) data.

More recently, functional connectivity, that is, detection and tracking of fiber bundles connecting functional areas based on diffusion tensor imaging (DTI) data, has gained much interest. Because of the structure of fMRI and DTI data, statistical methods in human brain research are strongly related to this research cluster (Auer et al.).

This research cluster comprises primarily applied research and empirical analyses, with an emphasis on tackling and solving substantive problems in business, economics and social science.

Topics cover a broad field of applications, ranging from marketing research, industrial economics, risk management for banks and insurance companies, socio-economic development, labour market analyses, official statistics and demography to empirical sociology and psychology.

Financial econometrics is subsumed in the cluster Econometrics. Research is often carried out in joint work with partners from universities and research institutions, or in cooperation with partners in business and industry. Research questions also emerge from consulting cases handled by the Statistical Consulting Unit. The focus is on solving substantive research questions with state-of-the-art methods developed within the department or in related work.

Empirical research and practical problems in business and economics confront scientists and practitioners with increasingly large and complex data sets, requiring modern statistical tools for adequate data analysis. Areas of current major interest are: risk analyses in the credit and actuarial sector, marketing research, labour market analyses, and public health as well as socio-economic problems in developing countries. There is intensive participation of our department in empirical research on social and psychological problems. In order to derive relevant results and reliable conclusions, statistics and probability indispensably require rigorous and steady reflection on their methodological foundations.

Two subclusters of vivid research at the department can be distinguished: Foundations of Statistical Inference, and Interval Probability. Questions of the second sort point to what may be called the conceptual background and paradigms prevalent in various interpretations of probability; they arise at the borderline of statistics, probability theory, philosophy, and philosophy of science. There are several (partially conflicting) paradigms of how to learn from a sample about a population. Currently, research in that field is centered around testing statistical hypotheses, statistical estimation theory and modelling genuinely indeterministic frames.

Developments in philosophy of science contribute to the understanding of probability and inference. The research group has been devoted to the development of the theory of interval probability for more than ten years. It constitutes a system of axioms and definitions producing statements of the same fundamental rigour as the classical theory, but with a much wider area of application. Therefore it promises to be advantageous in all disciplines employing descriptions of uncertainty, especially in economics. Beyond foundations of the theory of interval probability, its application to statistical inference, to decision theory and to robust statistics may be emphasized as prominent subjects of engagement for the research group.
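For readers new to interval probability, the sketch below computes lower and upper expectations of a gamble when only probability intervals are given for each outcome, by optimizing over all compatible distributions; the three outcomes, the intervals and the payoffs are invented for illustration.

    import numpy as np
    from scipy.optimize import linprog

    # Interval-valued probability assessment for three outcomes (hypothetical numbers).
    lower = np.array([0.2, 0.3, 0.1])
    upper = np.array([0.5, 0.6, 0.4])
    payoff = np.array([10.0, 0.0, -5.0])                   # a gamble on the three outcomes

    bounds = list(zip(lower, upper))
    A_eq, b_eq = [[1.0, 1.0, 1.0]], [1.0]                  # probabilities must sum to one

    # Lower/upper expectation: minimize/maximize the expected payoff over the credal set.
    low = linprog(c=payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    upp = -linprog(c=-payoff, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
    print(round(low, 2), round(upp, 2))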

Our academic collaborators are Frank Coolen (Durham), G. Kozine (Roskilde, DK), H. Rieder (Bayreuth) and M. Zaffalon (Lugano). The publication of the first volume of the book Elementare Grundbegriffe einer allgemeineren Wahrscheinlichkeitstheorie was supported by the DFG. Guests and visitors of the department included: Aalen, O. Oslo ; Albert, P. Maryland ; Beibel, M. Freiburg ; Cheng, C. Taipeh ; Davies, L. Essen ; Edwards, D. Kopenhagen ; Fokianos, K. Bern ; Heuer, C. Heidelberg ; Hruschka, H. Regensburg ; Lang, J.

Iowa ; Nikitin, Y. Petersburg ; Rieder, H. Berlin ; Santner, T. Loughborough ; Smith, M. Sydney ; Spokoiny, V. Berlin ; Stasinopoulos, D. London ; Timmer, J. Freiburg ; Wellner, J. Washington ; Wolters, J. Berlin ; Ziegler, A. Berliner, M. Nikosia ; Gamerman, D. Rio de Janeiro ; Gasmi, S. Magdeburg ; Gelfand, A. Connecticut ; Giudici, P. Pavia ; Hart, J. Mannheim ; Hujer, R. Frankfurt ; Keiding, N.

Kopenhagen ; Kohn, R. Sydney ; Kukush, O. Kiew ; Mittag, H. Berkeley ; Pohlmeier, W. Konstanz ; Prigarin, S. Novosibirsk ; Rammelt, P. Berlin ; Richardson, S. Villejuif, Frankreich ; Santner, T. Wisconsin ; Srivastava, V. Hagen ; Viertl, R. Wien ; Wermuth, N. Mannheim ; Zwanzig, S. Booth, J. Gainsville, Florida ; Cressie, N. Wien ; Hobart, J.

Florida, Gainsville ; Marx, B. North Carolina ; Opsomer, J. Iowa ; Rue, H. Trondheim ; Schaffrin, B. Bartels, R. Sydney ; Becker, C. Umea ; Coolen, F. Durham ; Dannegger, F. San Raffaele, Italien ; Gelfand, A. Helsinki ; Kauermann, G. Glasgow ; Knorr-Held, L. London ; Kukush, A. Los Banos, Philippines ; Ranta, J. Helsinki ; Schaffrin, B. Newark ; Shklyar, S. Adebayo, S. Ilorin, Nigeria ; Augustin, N. Freiburg ; Musio, M. Freiburg ; Berger, U. Glasgow ; Coolen F.

Durham , Chung, C. Taipeh ; de Cooman, G. Ghent ; Fokianos, K. Nikosia ; Friedl, H. Graz ; Gamerman, D. Rio de Janeiro ; Gieger, C. Heidelberg ; Grammig, J. Gallen ; Hafner R. Karlsruhe ; Knorr-Held, L. Lancaster ; Kukush, A. Kiew ; van der Linde, A. Bremen ; Mira, A. Mailand ; Odejar, A. Los Banos, Philippines ; Schaffrin, B.

Petersburg , Zaffalon, M. Alonso, A. Limburg ; Belenkiy, S. Bielefeld ; Carroll, R. Taipeh ; Coolen, F. Durham ; Davidov, O. Aalborg ; Kauermann, G. Bielefeld ; Mackerras, D. Alice Springs, Australia ; Mansmann, U.


Heidelberg ; Marx, B. Baton Rouge ; Molenberghs, G. Limburg ; O'Neill, P. Nottingham ; Santner, T. Glasgow ; Vardeman, S. Iowa State ; Vontheim, R.

Selected publications:

Semiparametric Bayesian Regression for Multivariate Responses. To appear in: Journal of Applied Econometrics. Auer, D. To appear in: Radiology.

    Augustin, T. In: G. Cozman, S. Moral and P. Walley eds. Gent, In: W. Gaul and H. Locarek-Junge, H. Heidelberg, On decision making under ambiguous prior and sampling information. Fine, S. Moral, T. Seidenfeld eds. Cornell University, Ithaca N. Neyman-Pearson testing under interval probability by globally least favorable pairs - Reviewing Huber-Strassen theory and extending it to general interval probability.

    Journal of Statistical Planning and Inference , Statistical Papers 43, Gaul and G. Ritter eds. Springer, Heidelberg, An exact corrected log-likelihood function for Cox's proportional hazards model. To appear in: Scandinavian Journal of Statistics. On the suboptimality of robust Bayesian procedures from the decision theoretic point of view. To appear in: J. Bernard, T. Seidenfeld, M. Zaffalon Hg.

    Carleton Scientific, Waterloo, To appear in: Journal of Statistical Planning and Inference. Cox's proportional hazards model under covariate measurement error - A review and comparison of methods. In: Van Huffel, S.

    Kluwer, Dordrecht. A bias analysis of Weibull models under heaped data. To appear in: Statistical Papers. Becker, U. Computational Statistics, 16, Bender, S. In: B. Marx, H. Friedl Hrsg. Determinanten der Arbeitslosigkeitsdauer in Westdeutschland. In: F. Diewald, P. Krause, A. Mertens, H. Solga Hrsg. Arbeitsmarktchancen und soziale Ausgrenzungen in Deutschland. Biller, C. Shaker Verlag, Aachen. Journal of Computational and Graphical Statistics 9, Lifetime Data Analysis 6, Posterior mode estimation in dynamic generalized linear mixed models.

    Allgemeines Statistisches Archiv 85, Bayesian varying-coefficient models using adaptive regression splines. Statistical Modelling 2, Blauth, A. Interactive analysis of high-dimensional association structures with graphical models. Metrika 51, Boulesteix, A. Caputo, A. A graphical chain model derived from a model selection strategy for the sociologists graduates study. Biometrical Journal 41, Undernutrition in Benin — an analysis based on graphical models.

    Social Science and Medicine 56, Carvalho, M. Modelling discrete time survival data with random slopes: evaluating hemodialysis centres. Statistics in Medicine, 22, Cheng, C. The polynomial regression with errors in the variables. Journal of the Royal Statistical Society B 60, A small sample estimator for a polynomial regression with errors in the variables. Journal of the Royal Statistical Society B 62, On the polynomial measurement error model. In: van Huffel, S. Kluwer, Dordrecht-Boston-London.

    Chiarella, C. Time Series Data. Studies in Nonlinear Dynamics and Econometrics, 6, Issue 1. European Journal of Finance , 8, Crook, A. Measuring spatial effects in time to event data: a case study using months from angiography to coronary artery bypass graft. Statistics in Medicine 22, Dannegger, F. Tree stability diagnostics and some remedies for instability. Statistics in Medicine 19, Diggle, P. Towards on-line spatial surveillance. In: Brookmeyer, R. Oxford University Press. Didelez, V. Maximum likelihood estimation in graphical models with missing values. Biometrika 85, ML— and semiparametric estimation in logistic models with incomplete covariate data.

    Statistica Neerlandica 56, Comments on graphical models for time series. In: Green, P. Hjort and S. Richardson eds. University Press, Oxford. A comparative analysis of graphical interaction and logistic regression modelling: self-care and coping with a chronic illness in later life. Biometrical Journal 44,

The author's aim is to provide impulses for the design and, where necessary, the adjustment of corporate planning; from this, a guiding model for sustainable corporate planning is derived.

Contents:

  1. Motivation and executive summary
  2. Structure of the thesis
  B. Theoretical motivation and state of research
    1. Management myopia
    2. Sustainability
  C. Research design
    1. Previously open research questions
    2. Integrative research design
    3. Statistical methodology
  D. Data collection and measurement model
    1. Questionnaire
    2. Population and data collection procedure
    3. Description of the sample
    4. The measurement model
  E. Analysis of the model
    1. Evaluation of the structural model
    2. Robustness of the structural model
    3. Summary assessment of the results
  F. Impulses for design