Books published by now publishers Inc
-
Since the seminal work of Andrey Kolmogorov in the early 1940s, imaging through atmospheric turbulence has grown from a pure scientific pursuit to an important subject across a multitude of civilian, space-mission, and national security applications. Fueled by the recent advancement of deep learning, the field is experiencing a new wave of momentum. However, for these deep learning methods to perform well, new efforts are needed to build faster and more accurate computational models while at the same time maximizing the performance of image reconstruction. The goal of this book is to present the basic concepts of turbulence physics in the service of image reconstruction. Starting with an exploration of optical modeling and computational imaging in Chapter 1, the book continues to Chapter 2, discussing the essential optical foundations required for the subsequent chapters. Chapter 3 introduces a statistical model of atmospheric turbulence and the propagation of waves through it. The practical implementation of the Zernike-based simulation is discussed in Chapter 4 (a toy sketch of the idea follows this listing), paving the way for the machine learning solutions to reconstruction in Chapter 5. In this concluding chapter, classical and contemporary trends in turbulence mitigation are discussed, providing readers with a comprehensive understanding of the field's evolution and a sense of its direction. The book is written primarily for image processing engineers, computer vision scientists, and engineering students who are interested in atmospheric turbulence, statistical optics, and image processing. It can be used as a graduate text or for advanced topics classes for undergraduates.
- Book
- 1.063,95 kr.
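As a flavor of what a Zernike-based simulator manipulates, here is a minimal sketch, not taken from the book, that sums a few low-order Zernike modes into a toy phase screen; the random coefficients are placeholders rather than draws from proper Kolmogorov statistics:

```python
import numpy as np

# Toy phase screen from three low-order Zernike modes (tip, tilt, defocus).
# The coefficients are placeholder random numbers; a faithful simulator
# would draw them from the Kolmogorov covariance developed in the book.

def zernike_modes(rho, theta):
    tip = 2.0 * rho * np.cos(theta)
    tilt = 2.0 * rho * np.sin(theta)
    defocus = np.sqrt(3.0) * (2.0 * rho**2 - 1.0)
    return np.stack([tip, tilt, defocus])

n = 128
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]   # unit-pupil grid
rho, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = rho <= 1.0

coeffs = np.random.default_rng(0).normal(size=3)   # placeholder strengths
screen = np.tensordot(coeffs, zernike_modes(rho, theta), axes=1) * pupil
print(screen.shape, float(screen[pupil].std()))
```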
-
Ultra-Reliable Low-Latency Communications (URLLC) was introduced into 5G networks to facilitate machine-to-machine communication for applications such as the Internet of Things. But designing URLLC systems, with disjointed treatment of the topic in the literature, has proven challenging. In this work, the authors present comprehensive coverage of URLLC, including the motivation, theory, practical enablers, and future evolution. The unified level of detail provides a balanced coverage between its fundamental communication- and information-theoretic background and its practical enablers, including 5G system design aspects (a finite-blocklength rate calculation in this spirit is sketched after this listing). The authors conclude by offering an outlook on URLLC evolution in the sixth-generation (6G) era towards dependable and resilient wireless communications. This is the first book to give the reader a complete, yet concise, introduction to the theoretical and application-oriented aspects of a topic at the core of both 5G and 6G wireless communication systems. As such, it is essential reading for designers and students of such systems.
- Book
- 1.063,95 kr.
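To make the information-theoretic side concrete, the following sketch evaluates the well-known finite-blocklength normal approximation for a complex AWGN channel; it is a standard textbook calculation, not code from the book, and the SNR, blocklength, and error target are illustrative:

```python
import numpy as np
from scipy.stats import norm

def urllc_rate(snr_db, n, eps):
    """Normal approximation R ~ C - sqrt(V/n) * Q^{-1}(eps), complex AWGN."""
    snr = 10 ** (snr_db / 10)
    cap = np.log2(1 + snr)                             # capacity, bits/use
    disp = (1 - (1 + snr) ** -2) * np.log2(np.e) ** 2  # channel dispersion
    return cap - np.sqrt(disp / n) * norm.isf(eps)

# 100-symbol blocks at 10 dB with a 1e-5 error target: the short-block
# penalty relative to capacity is visible directly.
print(urllc_rate(10.0, 100, 1e-5), np.log2(1 + 10.0))
```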
-
This book provides a detailed overview of possible applications of distributed optimization in power systems. Centralized algorithms are widely used for optimization and control in power system applications. These algorithms require all the measurements and data to be accumulated at a central location and hence suffer from a single point of failure. Additionally, these algorithms lack scalability in the number of sensors and actuators, especially with the increasing integration of distributed energy resources (DERs). As the power system becomes a confluence of a diverse set of decision-making entities with a multitude of objectives, the preservation of privacy and the operation of the system with limited information have been growing concerns. Distributed optimization techniques address these challenges while also ensuring resilient computational solutions for power system operation in the presence of both natural and man-made adversaries (a minimal consensus-based sketch follows this listing). There are numerous commonly used distributed optimization approaches, and a comprehensive classification of these is discussed and detailed in this work. All of these algorithms have displayed efficient identification of global optimum solutions for convex continuous distributed optimization problems. The algorithms discussed in the literature thus far are predominantly used to manage continuous state variables; however, the inclusion of integer variables in the decision support is needed for specific power system problems. The mixed integer programming (MIP) problem arises in power system operation and control due to tap-changing transformers, capacitors, and switches. There are numerous global optimization techniques for MIPs. While most are able to solve NP-hard convexified MIP problems centrally, they are time-consuming and do not scale well for large-scale distributed problems. Decomposition and a distributed-coordination solution approach can help to resolve the scalability issue. Although a large body of work on centralized solution methods for convexified MIP problems already exists, the literature on distributed MIPs is relatively limited. The distributed optimization algorithms applied in power networks to solve MIPs are included in this book. Machine learning (ML) based solutions can help achieve faster convergence for distributed optimization or can replace optimization techniques entirely, depending on the problem. Finally, a summary and path forward are provided, and the advancement needed in distributed optimization for the power grid is presented.
- Book
- 648,95 kr.
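As a minimal illustration of the consensus idea behind many of the surveyed methods, here is a toy consensus ADMM loop for private quadratic costs; it is a generic textbook construction with made-up data, not an algorithm reproduced from the book:

```python
import numpy as np

# Consensus ADMM for N agents, each with a private quadratic cost
# f_i(x) = (x - a_i)^2 (a stand-in for a local dispatch objective).
# Agents only exchange local estimates, never their private a_i.
a = np.array([1.0, 4.0, 7.0])    # hypothetical private targets
rho, N = 1.0, len(a)
x, u, z = np.zeros(N), np.zeros(N), 0.0

for _ in range(100):
    x = (2 * a + rho * (z - u)) / (2 + rho)  # local primal updates
    z = np.mean(x + u)                       # aggregator consensus step
    u += x - z                               # dual updates

print(z, a.mean())   # ADMM consensus matches the centralized optimum
```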
-
Blockchains are meant to provide an append-only sequence (ledger) of transactions. Security commonly relies on a consensus protocol in which forks in the sequence are either prevented completely or are exponentially unlikely to last more than a few blocks. This monograph proposes the design of algorithms and a system to achieve high performance (a few seconds from the time of initiation for transactions to enter the blockchain), the absence of forks, and a very low energy cost (a per-transaction cost that is a factor of a billion or more lower than bitcoin's). The foundational component of this setup is a group of satellites whose blockchain protocol code can be verified and burned into read-only memory. Because such satellites can perhaps be destroyed but cannot be captured (unlike even fortified terrestrial servers), a reasonable assumption is that the blockchain protocol code in the satellites may fail to make progress either permanently or intermittently but will not be traitorous. A second component of this setup is a group of terrestrial sites whose job is to broadcast information about blocks and to summarize the blockchain ledger. These can be individuals who are eager to earn a fee for service. Even if many of these behave traitorously (against their interests as fee-collectors), a small number of honest ones is sufficient to ensure safety and liveness. The third component of this setup is a Mission Control entity which will act very occasionally to assign roles to terrestrial sites and time slots to satellites. These assignments will be multi-signed using the digital signatures of a widely distributed group of human governors (a toy quorum check in this spirit is sketched after this listing). Given these components and these reasonable assumptions, the protocol described in this monograph, called Bounce, will achieve ledger functionality for arbitrarily sized blocks at under five seconds per block and at negligible energy cost. The monograph discusses the overall architecture and algorithms of such a system, the assumptions it makes, and the guarantees it gives.
- Book
- 698,95 kr.
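The following sketch models only the quorum logic behind such multi-signed assignments; the `verify` callback and the stand-in "signatures" are placeholders, since a real deployment would use actual digital-signature verification as described in the monograph:

```python
from typing import Callable

# Quorum check in the spirit of Bounce's multi-signed Mission Control
# assignments: an assignment takes effect only if enough distinct governors
# have validly signed it. `verify` stands in for real signature checking.

def quorum_approved(message: bytes,
                    signatures: dict[str, bytes],
                    verify: Callable[[str, bytes, bytes], bool],
                    quorum: int) -> bool:
    """Count distinct governors whose signature on `message` verifies."""
    valid = {g for g, sig in signatures.items() if verify(g, message, sig)}
    return len(valid) >= quorum

# Toy stand-in "signature": governor name echoed back (illustration only).
fake_verify = lambda g, msg, sig: sig == g.encode() + msg
msg = b"slot-42->satellite-7"
sigs = {g: g.encode() + msg for g in ["alice", "bob", "carol"]}
print(quorum_approved(msg, sigs, fake_verify, quorum=2))   # True
```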
-
A Complete Framework for Model-Free Difference-in-Differences Estimation proposes a complete framework for data-driven difference-in-differences analysis with covariates, in particular nonparametric estimation and testing. The authors start by simultaneously choosing confounders and a scale of the outcome alongside identification conditions. They first estimate heterogeneous treatment effects stratified along the covariates, then the average effect(s) for the treated (the canonical two-group, two-period special case is sketched after this listing). The monograph provides the asymptotic and finite-sample behavior of the estimators and tests, bootstrap procedures for their standard errors and p-values, and an automatic bandwidth choice. The pertinence of these methods is shown with a study of the impact of the Deferred Action for Childhood Arrivals program on educational outcomes for non-citizen immigrants in the US.
- Book
- 808,95 kr.
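For readers new to the method, the canonical two-group, two-period estimator, the simplest special case of what the monograph generalizes, can be computed in a few lines on synthetic data (the data-generating numbers below are made up):

```python
import numpy as np
import pandas as pd

# Canonical 2x2 difference-in-differences on synthetic data; the true
# treatment effect is 2.0, so the estimate should land close to it.
rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({"treated": rng.integers(0, 2, n),
                   "post": rng.integers(0, 2, n)})
df["y"] = (1.0 + 0.5 * df["treated"] + 1.5 * df["post"]
           + 2.0 * df["treated"] * df["post"] + rng.normal(0, 1, n))

m = df.groupby(["treated", "post"])["y"].mean()
did = (m.loc[(1, 1)] - m.loc[(1, 0)]) - (m.loc[(0, 1)] - m.loc[(0, 0)])
print(f"DiD estimate: {did:.3f}")   # ~2.0
```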
-
Dynamic information-flow control (IFC) is a principled approach to protecting the confidentiality and integrity of data in software systems. This tutorial provides a complete and homogeneous account of the latest advances in fine- and coarse-grained dynamic information-flow control security. Written for students, practitioners, and researchers, the authors first introduce both fine- and coarse-grained IFC in a gentle and accessible way (a micro-example of dynamic label tracking follows this listing), laying the groundwork for subsequent chapters. They proceed to show that, contrary to common belief, the granularity of the tracking system is not a fundamental feature of IFC systems and hence does not restrict how precise or permissive dynamic IFC systems can be. To achieve this, the authors demonstrate practical examples of both fine- to coarse-grained and coarse- to fine-grained program translation. This tutorial will give readers the insights required to understand, develop, and implement dynamic information-flow control to improve the security of a wide variety of software systems.
- Book
- 968,95 kr.
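As a micro-illustration of fine-grained dynamic IFC, the sketch below attaches labels to values, joins them on operations, and blocks flows to a public sink; it is a toy of our own construction, not a system from the tutorial:

```python
from dataclasses import dataclass

LOW, HIGH = 0, 1

@dataclass(frozen=True)
class Labeled:
    value: int
    label: int

    def __add__(self, other: "Labeled") -> "Labeled":
        # The result is as secret as the most secret operand (label join).
        return Labeled(self.value + other.value, max(self.label, other.label))

def public_sink(x: Labeled) -> None:
    if x.label == HIGH:
        raise PermissionError("flow of HIGH data to a public sink blocked")
    print(x.value)

salary = Labeled(90_000, HIGH)        # secret input
bonus = Labeled(5_000, LOW)
public_sink(bonus)                    # fine: LOW data on a public channel
try:
    public_sink(salary + bonus)       # HIGH taints the sum
except PermissionError as err:
    print("blocked:", err)
```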
-
In the past decade of marketing scholarship, researchers have begun to examine the promise of AI technology to address practical problems through a consumer lens. Artificial Intelligence in Marketing and Consumer Behavior Research reviews the state of the art of behavioral consumer research involving AI-human interactions and divides the literature into two primary areas based on whether the reported effects are instantiations of consumers displaying a positive or negative response to encounters with AI. This monograph aims to contribute to the literature by integrating the growing body of AI research in marketing and consumer psychology. In doing so, the authors focus on the burgeoning yet less examined behavioral studies conducted in marketing and consumer behavior. They also identify the theories and process mechanisms that explain the reported effects. Artificial Intelligence in Marketing and Consumer Behavior Research proceeds as follows. Section 1 examines the history of AI research in marketing. Section 2 reviews and categorizes the decision contexts explored to date in this literature, while identifying the key theoretical constructs explored in these contexts. Section 3 provides an overview of moderators that have been demonstrated to alter the effects of AI-related consumption. Section 4 examines psychological processes that underlie consumer responses to and decisions involving AI. Section 5 surveys the stimuli and manipulations employed in this research to date, while also suggesting a taxonomy of AI agents to guide future research designs. Section 6 offers future research directions for behavioral AI research in marketing.
- Book
- 838,95 kr.
-
The modern era is witnessing a revolution in the ability to scale computations to massively large data sets. A key breakthrough in scalability was the introduction of fast and easy-to-use distributed programming models such as the Massively Parallel Computation (MPC) model (also known as MapReduce). In this monograph the authors describe in detail algorithmic tools that leverage the unique features of the MPC model; these tools were chosen for their broad applicability, as they can serve as building blocks for the design of new algorithms in the area. They include partitioning and coresets, sample and prune, dynamic programming, round compression, and lower bounds (a simulated one-round example follows this listing). The monograph provides the reader with an accessible introduction to the most important tools of a framework used for the design of new algorithms deployed in systems using massively parallel computation.
- Book
- 753,95 kr.
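The partitioning flavor of such tools can be simulated on one machine: each "machine" below summarizes its shard with a local sum and a small sample, and one coordinator round combines the summaries. This is a toy stand-in, not an algorithm from the monograph:

```python
import numpy as np

# Simulated one-round MPC aggregation: the input is partitioned across
# machines with sublinear memory, each machine emits a small summary, and
# a coordinator combines the summaries. The sample-based median is a toy
# stand-in for the coreset / sample-and-prune tools described in the text.
rng = np.random.default_rng(0)
data = rng.normal(size=1_000_000)
machines = np.array_split(data, 100)          # partition across 100 machines

local_sums = [shard.sum() for shard in machines]            # map phase
local_samples = [rng.choice(shard, 50) for shard in machines]

total_mean = sum(local_sums) / len(data)                    # reduce phase
approx_median = np.median(np.concatenate(local_samples))
print(total_mean, approx_median)   # both close to 0.0 for standard normals
```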
-
Formal methods refer to rigorous, mathematical approaches to system development and have played a key role in establishing the correctness of safety-critical systems. The main building blocks of formal methods are models and specifications, which are analogous to behaviors and requirements in system design and give us the means to verify and synthesize system behaviors with formal guarantees. In this monograph the authors review the current state of the art of applications of formal methods in the autonomous systems domain. They first consider correct-by-construction synthesis under various formulations in known environments, before addressing uncertainty in systems that employ learning together with formal methods, including ways to overcome some limitations of such systems. Finally, they examine the synthesis of systems with monitoring to ensure a system can return to normalcy. They conclude with future directions for formal methods in reinforcement learning, uncertainty, privacy, explainability of formal methods, and regulation and certification. Covering important topics such as synthesis and reinforcement learning, it is a comprehensive resource for students, practitioners and researchers on the use of formal methods in modern systems.
- Book
- 1.063,95 kr.
-
Recent years have seen an unprecedented level of technological uptake and engagement by the mainstream. From deepfakes for memes to recommendation systems for commerce, machine learning (ML) has become a regular fixture in society. This ongoing transition from purely academic confines to the general public is not smooth, as the public does not have the extensive expertise in data science required to fully exploit the capabilities of ML. As automated machine learning (AutoML) systems continue to progress in both sophistication and performance, it becomes important to understand the 'how' and 'why' of human-computer interaction (HCI) within these frameworks. This is necessary for optimal system design and for leveraging advanced data-processing capabilities to support decision-making involving humans. It is also key to identifying the opportunities and risks presented by ever-increasing levels of machine autonomy. In this monograph, the authors focus on the following questions: (i) What does HCI currently look like for state-of-the-art AutoML algorithms? (ii) Do the expectations of HCI within AutoML frameworks vary for different types of users and stakeholders? (iii) How can HCI be managed so that AutoML solutions acquire human trust and broad acceptance? (iv) As AutoML systems become more autonomous and capable of learning from complex open-ended environments, will the fundamental nature of HCI evolve? To consider these questions, the authors project existing literature in HCI into the space of AutoML and review topics such as user-interface design, human-bias mitigation, and trust in artificial intelligence (AI). Additionally, to rigorously gauge the future of HCI, they contemplate how AutoML may manifest in effectively open-ended environments. Ultimately, this review serves to identify key research directions aimed at better facilitating the roles and modes of human interactions with both current and future AutoML systems.
- Book
- 1.063,95 kr.
-
Financial markets are undergoing an unprecedented transformation. Technological advances have brought major improvements to the operations of financial services. While these advances promote improved accessibility and convenience, traditional finance shortcomings like lack of transparency and moral hazard continue to plague centralized platforms, imposing societal costs. The advent of distributed ledger technologies presents an opportunity to alleviate some of the issues raised by centralized financial platforms, regardless of their integration of financial technology enhancements. These technologies have the potential to further disrupt the financial service industry by facilitating the transition to a decentralized trading environment, also referred to as decentralized finance (DeFi). DeFi enables the provision of services such as exchanges, lending, derivatives trading, and insurance without the need for a centralized intermediary (a toy constant-product exchange mechanism is sketched after this listing). This monograph provides an overview of the DeFi ecosystem, with a focus on exchanges, lending protocols, and the decentralized governance structure in place. The monograph also discusses the operational risks inherent in the design of smart contracts and the DeFi ecosystem, and it concludes with remarks and directions for future research.
- Book
- 543,95 kr.
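As one concrete example of a DeFi exchange mechanism, here is a sketch of a constant-product automated market maker of the kind used by Uniswap-style exchanges; the reserves and fee below are illustrative, and real protocols add considerable machinery on top:

```python
# A constant-product automated market maker: pool reserves (x, y) satisfy
# x * y = k, and a trade's output is whatever keeps the invariant, minus a
# fee that stays in the pool. Numbers below are illustrative only.

def swap_x_for_y(x_reserve: float, y_reserve: float,
                 dx: float, fee: float = 0.003) -> tuple[float, float, float]:
    """Trade dx of asset X into the pool; return (dy out, new x, new y)."""
    k = x_reserve * y_reserve
    dx_eff = dx * (1 - fee)              # fee stays with liquidity providers
    dy = y_reserve - k / (x_reserve + dx_eff)   # preserves x * y = k
    return dy, x_reserve + dx, y_reserve - dy

dy, x, y = swap_x_for_y(1_000.0, 1_000.0, 10.0)
print(round(dy, 3), x, round(y, 3))   # ~9.87 out: price impact + fee visible
```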
-
Intervention-based research (IBR) is a research method in which scholars closely interact with practicing managers to understand and solve complex problems, with the ultimate goal of generating novel theoretical insights. IBR calls for researchers to be actively involved in the problem-solving process and is particularly relevant and promising for operations management (OM) scholars, whose mission is to engage with practice to provide working solutions to operational problems. This is echoed in the rising interest among OM scholars in the application of IBR, yet researchers may struggle to find complete guidelines for designing and executing IBR projects. This monograph provides doctoral students and OM scholars with an overview of this novel research method. The authors make the case for the need for IBR, discuss its relation with engaged scholarship, and compare it with other commonly used research methods. They clarify the epistemological underpinnings of IBR by discussing how it supports abductive reasoning in theory building, and by exploring what is needed for IBR to yield theoretical insights. Furthermore, they outline the process that must be followed by researchers when conducting IBR, present strategies that can reduce uncertainty and risks during their engagement, and illustrate some of the best practices that can lead to stronger engagement with the problem. The authors also present recently published IBR papers in OM and use these papers to help the reader grasp concrete examples of the fundamental methodological tenets of IBR. The monograph concludes by synthesizing the threefold benefits of IBR: solving a problem from the field, generating theoretical insights, and educating aspiring managers on the problem and its solution.
- Book
- 753,95 kr.
-
Leveraging Online Search Data as a Source of Marketing Insights is written with two main audiences in mind. For practitioners, it offers a guide on how best to utilize platforms such as Google Trends and extract actionable insights for a wide array of business decisions, illustrated with real-world example applications. For academics, it provides a literature review and a framework that integrates the various avenues through which online search data can be leveraged in scientific research. The monograph starts with a brief tutorial of Google Trends and Google Ads Keyword Planner, two popular platforms for gathering online search trend and volume data, respectively (a minimal data-pull sketch follows this listing). It also briefly discusses Baidu Index as an alternative to Google Trends for insights about the Chinese market, where Baidu is the dominant search engine. The next section offers a review of the literature that has utilized online search data. First, it surveys research that has treated aggregate online search interest as either a concurrent or a leading indicator of real-world phenomena. Second, it examines research that has treated online search data as response variables that can help measure and improve marketing effectiveness in terms of both immediate and longer-term impacts. Third, it reviews research that has treated patterns of online searches as unvarnished reflections of the public psyche, uncovering what people really think, feel, and intend to do; such insights may otherwise be difficult to ascertain from what people post on social media or tell market researchers in surveys. The authors conclude by highlighting several promising areas for future research where online search data can serve as a big-data supplement to traditional market research.
- Book
- 648,95 kr.
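A minimal data pull of the kind the tutorial chapters build on might look as follows; note that pytrends is an unofficial third-party client whose API (and Google's backend) may change, so this is a sketch rather than an endorsed recipe:

```python
# Pull a weekly search-interest series (0-100 index) via the unofficial
# pytrends client; keyword and timeframe are illustrative.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(kw_list=["electric car"], timeframe="today 5-y")
interest = pytrends.interest_over_time()   # DataFrame, one column per keyword

# A concurrent-indicator style check would correlate this series with a
# sales or outcome series you supply yourself.
print(interest["electric car"].tail())
```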
-
Firm Social Capital and the Innovation Process conceptualizes, measures, and evaluates social capital as a productive input for innovative firms. First, a theoretical production function is conceptualized that considers social capital as an input into the production of three important innovation outputs: receipt of developmental (i.e., late-stage) funding; commercialization of an innovation realized through sales of a new product, service, or process; and growth-related activity of the firm developing the innovation, such as an initial public offering, formation of a spin-off firm, a firm sale or merger, a joint venture, or a product licensing agreement. Second, measures of social capital for innovative firms are developed based on the structural and content dimensions of relationships cultivated within and external to the firm. Through internal collaboration and engagements with external parties, social trust and reciprocity are built that promote the sharing of ideas and innovation. Third, social capital as an input into the production of innovation outputs is evaluated using a unique dataset comprising survey responses to a federal small business award program, the U.S. Small Business Innovation Research (SBIR) Program, which supports early-stage funding needs of firms developing an innovation. The dataset contains questions that provide insight into a firm's innovation output and its social capital, such as the nature and degree of engagements with third parties, as well as the accomplishments associated with the firm's internal collaborative activities. The empirical results presented in this monograph suggest that social capital may have significant importance in the production of innovative outcomes. The key contributions of this monograph include the development of a theoretical production model that includes social capital, the measurement of a firm's social capital as an input into production, and the empirical quantification of social capital as a productive input for innovative firms.
- Book
- 1.063,95 kr.
-
Signal processing traditionally relies on classical statistical modeling techniques. Such model-based methods utilize mathematical formulations that represent the underlying physics, prior information, and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies and may lead to poor performance when real systems display complex or dynamic behavior. More recently, deep learning approaches that use highly parametric deep neural networks (DNNs) have become increasingly popular. Deep learning systems do not rely on mathematical modeling and learn their mapping from data, which allows them to operate in complex environments. However, they lack the interpretability and reliability of model-based methods, typically require large training sets to obtain good performance, and tend to be computationally complex. Model-based signal processing methods and data-centric deep learning each have their pros and cons. These paradigms can be characterized as edges of a continuous spectrum varying in specificity and parameterization. The methodologies that lie in the middle ground of this spectrum, thus integrating model-based signal processing with deep learning, are referred to as model-based deep learning, and are the focus here. This monograph provides a tutorial-style presentation of model-based deep learning methodologies. These are families of algorithms that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches. Such model-based deep learning methods exploit both partial domain knowledge, via mathematical structures designed for specific problems, as well as learning from limited data. The monograph includes running signal processing examples in super-resolution, tracking of dynamic systems, and array processing. It is shown how these examples are expressed using the provided characterization and specialized in each of the detailed methodologies (a miniature unrolled-optimization example follows this listing). The aim is to facilitate the design and study of future systems at the intersection of signal processing and machine learning that incorporate the advantages of both domains. The source code of the numerical examples is available and reproducible as Python notebooks.
- Book
- 1.008,95 kr.
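A miniature example of the middle ground the monograph describes is an unrolled iterative algorithm: below, plain ISTA for sparse recovery is truncated to a fixed number of "layers" whose step and threshold parameters could be learned from data (here they keep textbook values); this is a generic illustration, not one of the book's notebooks:

```python
import numpy as np

# ISTA for sparse recovery, truncated to a fixed number of "layers".
# Learning per-layer step/threshold values from data is what turns this
# model-based iteration into a LISTA-style model-based deep network.

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
m, n, k = 40, 100, 4
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2   # these would be learned per layer
lam = 0.05
x = np.zeros(n)
for _ in range(50):                      # 50 "layers"
    x = soft(x + step * A.T @ (y - A @ x), step * lam)

print(np.round(x[x_true > 0], 2))        # approximately recovers the spikes
```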
-
Rethinking State Capitalism: A Cross-Disciplinary Perspective on the State's Role in the Economy aims to foster cross-fertilization and thinking across social science disciplines about a new theory of the state in the economy. The authors review the state capitalism literature from various disciplines to shed light on the theorization of the state in socioeconomics by focusing on different themes. The study is structured into sections, each presenting one perspective on the state in relation to the economy (history, political economy, finance, public administration, economics). Each section sets out the state of the art regarding the theme in question and identifies blind spots and gaps in the respective research area, concluding with suggestions for future research directions.
- Book
- 753,95 kr.
-
Data structures are the means by which software programs store and retrieve data. This monograph focuses on key-value data structures, which are widely used for data-intensive applications thanks to the versatility of the key-value data model. Key-value data structures constitute the core of any data-driven system. They provide the means to store, search, and modify data residing at various levels of the storage and memory hierarchy. Designing efficient data structures for given workloads has long been a focus of research and practice in both academia and industry. Data Structures for Data-Intensive Applications explains the space of data structure design choices, how to select the appropriate data structure depending on the goals and workload of an application at hand, and how ever-evolving hardware and data properties require innovations in data structure design (a toy write-optimized store is sketched after this listing). The overarching goal is to help the reader both select the best existing data structures and design and build new ones.
- Book
- 1.078,95 kr.
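One point in that design space is the write-optimized, log-structured family; the toy store below caricatures it with an in-memory memtable flushed into immutable sorted runs. It is a deliberately simplified sketch, not a design taken from the book:

```python
import bisect

# A toy log-structured key-value store: writes land in a memtable that is
# flushed to immutable sorted runs; reads check the newest data first.

class TinyLSM:
    def __init__(self, memtable_limit=4):
        self.memtable, self.runs, self.limit = {}, [], memtable_limit

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:     # flush to a sorted run
            self.runs.append(sorted(self.memtable.items()))
            self.memtable = {}

    def get(self, key):
        if key in self.memtable:
            return self.memtable[key]
        for run in reversed(self.runs):          # newest run first
            i = bisect.bisect_left(run, (key,))
            if i < len(run) and run[i][0] == key:
                return run[i][1]
        return None

db = TinyLSM()
for i in range(10):
    db.put(f"k{i}", i)
print(db.get("k3"), db.get("missing"))   # 3 None
```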
-
Perspectives of Neurodiverse Participants in Interactive Information Retrieval offers a survey of work to date to inform how interactions in information retrieval systems could afford inclusion of users who are neurodiverse. The existing work is positioned within a range of philosophies, frameworks, and epistemologies that frame the importance of including neurodiverse users in all stages of research and development of Interactive Information Retrieval (IIR) systems. The authors motivate the need for inclusive IIR to develop more broadly, and detail why assistive technologies are not sufficient to ensure inclusive access to information. This survey is motivated by the authors' desire to help transform IIR so that neurodiverse users can both inspire future research and benefit from innovation. This work is presented to students, researchers, and practitioners in IIR with a view to providing both knowledge and inspiration towards more inclusive IIR approaches and systems. To that end, the survey contains an overview of each of the chapters to guide the reader to the parts of the survey that may be most relevant to the work they are undertaking. The authors also offer examples and practical approaches to include neurodiverse users in IIR research, and explore the challenges ahead in the field.
- Book
- 968,95 kr.
-
This third and final part of the INFINITECH book series begins by providing a definition of FinTech, namely: the use of technology to underpin the delivery of financial services. The book further discusses why FinTech is the focus of industry nowadays, as the waves of digitization driven by financial technology (FinTech) and insurance technology (InsuranceTech) are rapidly transforming the financial and insurance services industry. The book also introduces technology assets that follow the Reference Architecture (RA) for BigData, IoT and AI applications. Moreover, the series of assets covers the domain areas where applications from the INFINITECH innovation project and the concept of innovation for the financial sector are described. Further described is the INFINITECH Marketplace and its components, including details of available assets, as well as a description of solutions developed in INFINITECH.
- Book
- 1.238,95 kr.
-
Entrepreneurial Ecosystem Mechanisms structures and synthesizes the field of entrepreneurial ecosystem studies with a focus on the empirical evidence of the underlying causal mechanisms. First, the authors define some key academic 'tools' underpinning the analysis: concept, framework, model, theory, and mechanisms. Next, they identify, categorize, and organize the factors deemed most relevant to understanding entrepreneurial ecosystems: a framework. This framework provides the foundations for a model where the specific functional relationships among particular variables or indicators are hypothesized to operate in some well-defined set of conditions. Five causal mechanisms are conceptualized that are grounded in earlier work, including (1) interdependencies between ecosystem elements, (2) the link between entrepreneurial ecosystems and entrepreneurial outputs, (3) wider socio-economic development, (4) downward causation, and (5) links and flows of ideas, people, and resources between different entrepreneurial ecosystems. This systematic literature review synthesizes empirical studies on the causal relationships among the ecosystem elements and how they are linked to outputs and outcomes. The goal is to develop a comprehensive and mechanism-based understanding of the entrepreneurial ecosystem concept, how it can contribute to entrepreneurship and economic development policy, and our wider understanding of the contextual nature of entrepreneurship. While recent reviews of the entrepreneurial ecosystem literature have sought to bring together this rapidly shifting field, this monograph adds to these works in two key ways: first, by embracing a broad literature covering the entirety of the entrepreneurial ecosystem concept rather than specialties such as ecosystems in emerging economies or specific domains, and second, by drawing on this literature to identify the empirical evidence for the five causal mechanisms linking the contexts in which entrepreneurship takes place with specific outcomes such as firm growth, innovation, and increases in overall welfare. The authors discuss the implications of the results in light of existing research agendas rather than developing a new one, as the aim of the review is to synthesize existing work. This is crucial for the credibility of the entrepreneurial ecosystem concept and its future within academic research, policy, and business practice more broadly.
- Book
- 913,95 kr.
-
A stream of research has begun to examine other forms of entrepreneurship, one of them being niche entrepreneurship, with hidden champions being a prime example. Many economies are powered by rather small and medium-sized companies, and some of these companies have become world-market leaders in highly specific niche markets and have been termed "hidden champions". Hidden champions are small to midsized companies that are often technology leaders within their niche market, yet they are not commonly known to the public given their specialized markets. They are often located in rural areas and serve B2B markets, so that they are known only by customers and the local community. The Evolution of Hidden Champions as Niche Entrepreneurs portrays hidden champions as an important vehicle in shaping a country's technological and economic competitiveness, revealing their key business strategies and investigating their evolution over space (geographical distribution) and time (technological change). Finally, the authors illustrate the subsequent growth in academic research, which shows that economies differ in providing hidden champions with an institutional environment that complementarily fits their individual strategic choices. The second part of this work analyses the evolution of hidden champions over time. The monograph starts by introducing the fundamental concept of hidden champions. Section 2 defines the concept and distinguishes it from similar concepts. Section 3 presents the key characteristics of hidden champions and provides empirical evidence from research on hidden champions. Section 4 presents the evolution of hidden champions, both geographically and historically. Section 5 outlines the development of the hidden champions research field and its main milestones. Section 6 then presents the key contemporary discussions around hidden champions. Section 7 concludes and provides an outlook into future research.
- Book
- 863,95 kr.
-
Reinforcement learning agents have demonstrated remarkable achievements in simulated environments. Data efficiency, however, significantly impedes carrying this success over to real environments. The design of data-efficient agents that address this problem calls for a deeper understanding of information acquisition and representation. This tutorial offers a framework that can guide associated agent design decisions. This framework is inspired in part by concepts from information theory, which has grappled with data efficiency for many years in the design of communication systems. In this tutorial, the authors shed light on questions of what information to seek, how to seek that information, and what information to retain. To illustrate the concepts, they design simple agents that build on them and present computational results that highlight data efficiency (a classic agent in this spirit is sketched after this listing). This book will be of interest to students and researchers working in reinforcement learning as well as information theorists wishing to apply their knowledge in a practical way to reinforcement learning problems.
- Book
- 1.063,95 kr.
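The simplest widely known agent in this spirit is Thompson sampling on Bernoulli bandits: its posteriors are the retained information, and posterior sampling drives information seeking. The sketch below is a classic construction, not an agent from the tutorial:

```python
import numpy as np

# Thompson sampling on Bernoulli bandits. Beta posteriors are the retained
# information; sampling from them decides which arm to probe next.
rng = np.random.default_rng(0)
true_p = np.array([0.3, 0.5, 0.7])       # unknown arm reward probabilities
wins, losses = np.ones(3), np.ones(3)    # Beta(1,1) priors

for _ in range(2000):
    arm = int(np.argmax(rng.beta(wins, losses)))  # sample beliefs, act greedily
    reward = rng.random() < true_p[arm]
    wins[arm] += reward
    losses[arm] += 1 - reward

print(wins / (wins + losses))            # posterior means near true_p
print(int(np.argmax(wins + losses)))     # most-pulled arm -> the best arm
```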
-
This monograph contains eight thought-leading contributions on various topics related to supply chain finance and risk management: "Disruption Mitigation and Pricing Flexibility" by Oben Ceryan and Florian Lücker. "Optimal Newsvendor IRM with Downside Risk" by Paolo Guiotto and Andrea Roncoroni. "Competitive Forward and Spot Trading Under Yield Uncertainty" by Lusheng Shao, Derui Wang, and Xiaole Wu. "The Impact of Commodity Price Uncertainty on the Economic Value of Waste-to-Energy Conversion in Agricultural Processing" by Bin Li, Onur Boyabatlı, and Buket Avcı. "Corporate Renewable Procurement" by Selvaprabu Nadarajah. "Blockchain-Based Digital Payment Obligations for Trade Finance" by Jing Hou, Burak Kazaz, and Fasheng Xu. "Long-term Service Agreement in Power Systems" by Panos Kouvelis, Hirofumi Matsuo, Yixuan Xiao, and Quan Yuan. "The Bullwhip Effect in Servicized Manufacturers" by Jiang Shenyang, Jiang Zhibin, Niu Yimeng, and Wu Jing.
- Book
- 1.063,95 kr.
-
In this first part of the INFINITECH book series, which is a series of three books, the principles of the modern economy that make the financial sector and FinTech the most disruptive areas in today's global economy are discussed. INFINITECH envisions many opportunities emerging for activating new channels of innovation on the local and global scale, while at the same time catapulting opportunities for more disruptive user-centric services. At the same time, INFINITECH is the result of a shared vision from a representative global group of experts, providing a common vision and identifying impacts in the financial and insurance sectors.
- Book
- 1.238,95 kr.
-
In this second part of the INFINITECH book series, which is a series of three books, the basic concepts of FinTech, referring to the diversity in the use of technology to underpin the delivery of financial services, are reviewed. The demand and the supply side in 16 of the most representative scenarios in the financial sector are demonstrated, and it is further discussed why FinTech is nowadays considered the most trending topic in the financial industry, being responsible for waves of service digitization. Financial technology (FinTech) and insurance technology (InsuranceTech) are rapidly transforming the financial and insurance services industry. An overview of the Reference Architecture (RA) for BigData, IoT and AI applications in the financial and insurance sectors (INFINITECH-RA) is also provided. Moreover, the book reviews the concept of innovation and its application in INFINITECH, and the innovative technologies provided by the project, with practical examples for the financial sector.
- Book
- 1.238,95 kr.
-
The rapid spread of infectious diseases and online rumors share similarities in terms of their speed, scale, and patterns of contagion. Although these two phenomena have historically been studied separately, the COVID-19 pandemic has highlighted the devastating consequences that simultaneous crises of epidemics and misinformation can have on the world. Soon after the outbreak of COVID-19, the World Health Organization launched a campaign against the COVID-19 Infodemic, which refers to the dissemination of pandemic-related false information online that causes widespread panic and hinders recovery efforts. Undoubtedly, nothing spreads faster than fear. Networks serve as a crucial platform for viral spreading, as the actions of highly influential users can quickly render others susceptible to the same influence. The potential for contagion in epidemics and rumors hinges on the initial source, underscoring the need for rapid and efficient digital contact tracing algorithms to identify super-spreaders or Patient Zero. Similarly, detecting and removing rumor mongers is essential for preventing the proliferation of harmful information in online social networks. Identifying the source of large-scale contagions requires solving complex optimization problems on expansive graphs. Accurate source identification and understanding of the dynamic spreading process require a comprehensive understanding of surveillance in massive networks, including topological structures and spreading veracity. Ultimately, the efficacy of algorithms for digital contact tracing and rumor source detection relies on this understanding. This monograph provides an overview of the mathematical theories and computational algorithm design for contagion source detection in large networks. By leveraging network centrality as a tool for statistical inference, one can accurately identify the source of contagions, trace their spread, and predict future trajectories (a toy tree-network example follows this listing). This approach provides fundamental insights into surveillance capability and the asymptotic behavior of contagion spreading in networks. Mathematical theory and computational algorithms are vital to understanding contagion dynamics, improving surveillance capabilities, and developing effective strategies to prevent the spread of infectious diseases and misinformation.
- Book
- 1.063,95 kr.
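On tree networks, the classic rumor-centrality estimator of Shah and Zaman, one of the centrality tools in this literature, admits a compact implementation; the balanced tree below is a stand-in for an observed infected subgraph:

```python
import math
import networkx as nx

# Rumor centrality on a tree: the maximum-likelihood source estimate under
# the SI model is the node maximizing R(v) = (n-1)! / prod_{u != v} t_u(v),
# where t_u(v) is the size of u's subtree when the tree is rooted at v.

def rumor_centrality(T: nx.Graph) -> dict:
    n = T.number_of_nodes()
    root = next(iter(T.nodes))
    parent = nx.dfs_predecessors(T, root)
    size = {}
    for v in nx.dfs_postorder_nodes(T, root):          # subtree sizes
        size[v] = 1 + sum(size[c] for c in T[v] if parent.get(c) == v)
    R = {root: math.factorial(n - 1)
               / math.prod(size[v] for v in T if v != root)}
    for v in nx.dfs_preorder_nodes(T, root):           # re-root downward:
        if v != root:                                  # R(c)=R(p)*t_c/(n-t_c)
            R[v] = R[parent[v]] * size[v] / (n - size[v])
    return R

T = nx.balanced_tree(2, 3)               # toy infected subgraph, 15 nodes
scores = rumor_centrality(T)
print(max(scores, key=scores.get))       # the tree's center: node 0
```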
-
The Theory of Auditing Economics provides a review of existing auditing economic theory, highlights the limitations of the literature, and discusses possible future research. Specifically, the objective of this monograph is two-fold: to provide researchers who are interested in analytical auditing with an overview of the existing literature, and to explain auditing theory to researchers who employ other methodologies, such as archival or experimental, and help guide their research.
- Book
- 1.023,95 kr.
-
Probabilistic amplitude shaping (PAS) is a practical architecture for combining non-uniform distributions on higher-order constellations with off-the-shelf forward error correction (FEC) codes. This efficient and practical technique has had a large industrial impact, in particular in fiber-optic communications. This monograph details the practical considerations that led to the invention of PAS and provides an information-theoretic assessment of the PAS architecture. Because of the separation into a shaping layer and an FEC layer, the theoretic analysis of PAS requires new tools. On the shaping layer, the cost penalty and rate loss of finite-length distribution matchers (DMs) are analyzed (the core amplitude-distribution calculation is sketched after this listing). On the FEC layer, achievable FEC rates are derived. Using mismatched decoding, achievable rates are studied for several decoding metrics of practical importance. Combining the findings, it is shown that PAS with linear codes is capacity-achieving on a certain class of discrete input channels. Written by one of the inventors of probabilistic amplitude shaping, this monograph contains a wealth of insights into the theory and practical implementation of the technique in modern communication systems. This is a must-read for engineers and students of communications.
- Book
- 968,95 kr.
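The shaping layer's starting point can be computed in a few lines: pick the Maxwell-Boltzmann distribution over ASK amplitudes whose entropy hits a target rate. The amplitude set and target below are illustrative, and the bisection is a generic numerical choice rather than the book's method:

```python
import numpy as np

# Shaping-layer calculation: find the Maxwell-Boltzmann distribution
# P(a) ~ exp(-nu * a^2) over the 8-ASK amplitudes whose entropy matches a
# target shaping rate, via bisection on nu (entropy decreases in nu).

amps = np.array([1.0, 3.0, 5.0, 7.0])

def mb_dist(nu):
    w = np.exp(-nu * amps ** 2)
    return w / w.sum()

def entropy(p):
    return float(-np.sum(p * np.log2(p)))

target = 1.75                             # bits per amplitude (illustrative)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if entropy(mb_dist(mid)) > target else (lo, mid)

p = mb_dist(lo)
print(np.round(p, 4), entropy(p))         # H(A) ~ 1.75 bits
```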
-
Optimization is a ubiquitous modeling tool and is often deployed in settings which repeatedly solve similar instances of the same problem. Amortized optimization methods use learning to predict the solutions to problems in these settings, exploiting the shared structure between similar problem instances (a one-screen illustration follows this listing). These methods have been crucial in variational inference and reinforcement learning and are capable of solving optimization problems many orders of magnitude faster than traditional optimization methods that do not use amortization. In this tutorial, the author presents an introduction to the amortized optimization foundations behind these advancements and overviews their applications in variational inference, sparse coding, gradient-based meta-learning, control, reinforcement learning, convex optimization, optimal transport, and deep equilibrium networks. Of practical use to the reader is the source code accompanying the Implementation and Software Examples chapter. This tutorial provides the reader with a complete source for understanding the theory behind and implementing amortized optimization in many machine learning applications. It will be of interest to students and practitioners alike.
- Book
- 1.063,95 kr.
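The core idea fits in one screen for a family of least-squares problems, whose solution map is exactly linear and hence perfectly amortizable by linear regression; this is a deliberately trivial illustration of amortization, not an example from the tutorial:

```python
import numpy as np

# Amortization in one picture: instead of running an optimizer on every new
# problem instance, fit a model mapping problem parameters to solutions.
# Instances are y*(theta) = argmin_y ||A y - theta||^2, whose exact solution
# is linear in theta, so linear regression amortizes it perfectly.

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
pinv = np.linalg.pinv(A)                   # the "slow optimizer" ground truth

thetas = rng.normal(size=(1000, 20))       # training problem instances
y_star = thetas @ pinv.T                   # their exact solutions

W, *_ = np.linalg.lstsq(thetas, y_star, rcond=None)   # learned solver

theta_new = rng.normal(size=20)            # unseen instance: one matmul,
print(np.allclose(theta_new @ W, pinv @ theta_new))   # no inner optimization
```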