
Books published by MOHAMMED ABDUL SATTAR

  • by Vikas
    373,95 kr.

    Cancer is a disease in which biological cells grow uncontrollably and spread to other organs of a patient. Cancer comprises abnormal cells, which can develop in any part of the human body. Normally, cells grow and multiply in a controlled manner, and when they become old or damaged they die, a process known as programmed cell death or apoptosis. Sometimes this controlled process of cell growth and death breaks down, and abnormal growth of cells forms tumors in the body. Tumor cells that grow slowly and do not spread in the body are known as benign (noncancerous) tumors; tumor cells that grow rapidly and spread throughout the body are called malignant (cancerous) tumors. While there is no single cause of cancer, factors such as smoking, excessive UV exposure from the sun or tanning beds, being overweight or obese, and excessive alcohol use can all contribute to the formation of cancerous cells in the body. A few emerging techniques, such as immunotherapy, radio-frequency ablation, thermal ablation, cryotherapy, photodynamic therapy and plasmonic photothermal therapy, show higher efficacy and longer survival rates than the conventional methods. Nanoparticles (1-100 nm in size) offer a biocompatible and biodegradable platform for cancer diagnosis, as contrast agents, and for cancer treatment through nanoparticle-based thermal ablation or delivery of conventional chemotherapeutic drugs. The interaction of light with biological tissue provides the basis for a wide variety of biomedical applications, such as non-invasive diagnosis (tumor detection, contrast microscopy, optical coherence tomography, etc.), treatment (laser surgery, photodynamic therapy, photothermal therapy, etc.), and monitoring of bio-parameters (pulse oximetry, glucose monitoring, etc.). Thermal ablation of a tumor using the interaction of light with metallic nanoparticles is known as plasmonic photothermal interaction. It allows tumor treatment with minimal side effects on nearby healthy tissue. The technique is critically governed by the optical characteristics of the tissue, as well as by the variation of those properties during the interaction of light with the nanoparticles or tissue. For example, the efficacy of cancer treatment by plasmonic photothermal therapy depends mainly on the light absorption by tumor cells and/or nanoparticles. Therefore, it is important to characterize the optical properties of tissue, especially in the presence of nanoparticles and at different stages of photothermal therapy, both for pre-treatment planning and for improving the efficacy of the therapy.
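
    As an illustrative aside (a standard tissue-optics relation, not drawn from the book itself), the light absorption that governs the heating can be approximated by the Beer-Lambert law,

    $$ I(z) = I_0 \, e^{-\mu_a z}, $$

    where $I_0$ is the incident intensity, $\mu_a$ is the absorption coefficient of the tissue (increased when plasmonic nanoparticles accumulate in the tumor) and $z$ is the penetration depth; the energy removed from the beam per unit volume is what is converted into heat during photothermal therapy.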

  • by Anindita Mukherjee
    353,95 kr.

    Content-based image retrieval (CBIR) has become a popular area of research for both the computer vision and multimedia communities. It aims at organizing digital picture archives by analyzing their visual contents. CBIR techniques make use of these visual contents to retrieve images in response to a particular query. Note that this differs from traditional retrieval systems that use keywords to search for images. Due to widespread variations in the images of standard image databases, achieving high precision and recall for retrieval remains a challenging task. In the recent past, many CBIR algorithms have applied the Bag of Visual Words (BoVW) model for describing the visual contents of images. Though BoVW has emerged as a popular image content descriptor, it has some important limitations which can in turn adversely affect retrieval performance. Image retrieval has applications in diverse fields including healthcare, biometrics, digital libraries, historical research and many more (da Silva Torres and Falcao, 2006). In retrieval systems, two kinds of approaches are mainly followed, namely Text-Based Image Retrieval (TBIR) and Content-Based Image Retrieval (CBIR). The former approach requires a great deal of human effort, time and perception. Content-based image retrieval is a technique that enables a user to extract similar images, based on a query, from a database containing a large number of images. The basic issue in designing a CBIR system is to select the image features that best represent the image content in a database. As part of a CBIR system, one has to apply appropriate visual content descriptors to represent these images. A query image is represented in the same way. Then, based on some measure of similarity, a set of images is retrieved from the available image database. The relevance feedback part, which incorporates inputs from a user, can be an optional block in a CBIR system. The fundamental problem in CBIR is how to transform the visual contents into distinctive features for dissimilar images, and into similar features for images that look alike. BoVW has emerged in the recent past as a popular model for representing the visual content of an image. It tries, to some extent, to bridge the gap between low-level visual features and high-level semantic features.
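
    To make the BoVW retrieval pipeline concrete, here is a minimal Python sketch (illustrative only, not the book's code): it assumes each image is already described by a set of local descriptors such as SIFT vectors, builds a visual vocabulary with k-means, encodes images as visual-word histograms and ranks them by cosine similarity. Random data stands in for real descriptors.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    database_descriptors = [rng.normal(size=(100, 128)) for _ in range(20)]  # 20 images
    query_descriptors = rng.normal(size=(100, 128))

    # 1. Build the visual vocabulary by clustering all local descriptors.
    vocab_size = 64
    kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
    kmeans.fit(np.vstack(database_descriptors))

    def bovw_histogram(descriptors):
        """Quantize descriptors to visual words and return a normalized histogram."""
        words = kmeans.predict(descriptors)
        hist, _ = np.histogram(words, bins=np.arange(vocab_size + 1))
        return hist / max(hist.sum(), 1)

    # 2. Encode every database image and the query as BoVW histograms.
    db_hists = np.array([bovw_histogram(d) for d in database_descriptors])
    q_hist = bovw_histogram(query_descriptors)

    # 3. Rank database images by cosine similarity to the query histogram.
    sims = db_hists @ q_hist / (np.linalg.norm(db_hists, axis=1) * np.linalg.norm(q_hist) + 1e-12)
    print("Top-5 matches:", np.argsort(-sims)[:5])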

  • by V. Moharir Rucha
    373,95 kr.

    Polymers were discovered accidentally during experiments with formaldehyde and phenols, and they later turned out to be among the most resourceful materials ever developed in the area of polymer science. Given their diverse uses, plastics became one of the essential materials of human society. The polymeric structure is one of the strengths of plastic; it has made the material strong and durable and given it a leading place among materials. Repetitive monomeric bonds provide high strength and resistance to many environmental factors as well as to biological activity, thus extending the material's life in use and storage. Plastics are used in almost every sector worldwide, with around 70% of goods made from plastics, underlining the demand for their production. The chief attributes that have made plastic so widely used are its light weight, durability, flexibility, transparency and versatility, giving it exceptional abilities that make it stand out among other materials. Because the material is lightweight, durable and non-reactive, it is mostly adopted for food wrapping and packaging purposes. Providing one benefit after another, the material has gained wide acceptance in industry and the market. Being a globalized product, plastic has an impact on every sector, and in some places its use is inevitable, as in consumer packaging, which accounts for nearly 42% of global annual resin production. The transparency and impermeability of packaging films provide three key advantages: conformal packaging, a protected environment and easy transport. In hot-food service applications, polystyrene is primarily adopted for its insulating properties. These resins are so handy to use that they have replaced many materials in the packaging of goods and food amenities. Materials like wood, metals and some building materials have even been replaced by plastics, a segment that accounts for almost 19% of global production. Apart from replacing metals and wood, plastics have also replaced natural fibres such as cotton, wool and silk. Plastics are extensively used in medical applications as they offer required qualities such as resistance to microbes and suitability for single-use items.

  • by Radhika Wazalwar
    378,95 kr.

    A synthetic thermosetting polymer, epoxy consists of two components: the resin and the curing agent. The resin provides a sufficient number of highly reactive terminal epoxide groups, and the curing agent is responsible for bonding with the resin at these epoxide groups to form a rigid cross-linked network. Epoxy resins are versatile due to their excellent mechanical, thermal, corrosion-resistant, chemical-resistant and adhesive properties. As a result, epoxy composites are widely used in structural applications. Depending on the starting materials and the synthesis method, various types of epoxy resin can be synthesized. Epoxies can be categorized into two families, glycidyl and non-glycidyl, which are further classified into various types. The choice of resin is driven by a variety of factors, among them the viscosity of the uncured resin, the epoxy equivalent weight, the curing behavior, the cross-linking density, the glass transition temperature (Tg), and the service performance. The epoxy equivalent weight (EEW) is the ratio of the molecular weight of the epoxy monomer to the number of epoxide groups present, expressed in g/equivalent. The EEW of a resin is used to calculate the amount of hardener required to achieve optimal curing of that resin. A stoichiometric or near-stoichiometric quantity of hardener should be added to the epoxy resin to achieve a good-quality cured epoxy. Uncured epoxy resins are inadequate for practical applications and therefore need to be cured using a curing agent. A suitable hardening agent can be chosen depending on the type of epoxy being used and the desired end application of the epoxy composite.
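
    As an illustrative aside (the numbers below are hypothetical, not taken from the book), the stoichiometry sketched above can be written as

    $$ \mathrm{EEW} = \frac{M_{\text{monomer}}}{n_{\text{epoxide groups}}}, \qquad \mathrm{phr} = \frac{\mathrm{AHEW}}{\mathrm{EEW}} \times 100, $$

    where phr is parts of hardener per hundred parts of resin and AHEW is the amine hydrogen equivalent weight of an amine-type curing agent. For example, a resin with EEW = 190 g/eq cured with a hardener of AHEW = 30 g/eq would need roughly (30/190) x 100 ≈ 16 g of hardener per 100 g of resin for near-stoichiometric curing.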

  • by Rahul Kumar Verma
    338,95 kr.

    The identity of an individual entity lies in the wholeness of the system in which it is present. We observe numerous complex phenomena happening around us, and to study them we define them as systems of particular entities that give rise to those phenomena. Modelling these complex systems gives rise to complex networks. These networks represent the meaningful connections between the entities of the complex system. "I think the next century (21st) will be the century of complexity", Stephen Hawking once said, in light of the omnipresence of complex systems around us. The past two decades have demonstrated the immense potential of network science due to its holistic approach, flexibility, and applicability to vast fields of scientific research. Network science has provided various models and algorithms under the umbrella of statistical physics to analyze the natural and social sciences, including complex biological systems. As in any other physical system, it is necessary to identify and characterize the individual building blocks of complex biological systems and to obtain and establish insights into their interactions. Biological complex systems can be defined by multiple types of entities, such as biomolecules (proteins and genes), pathways (metabolic, anabolic, and disease), cells (neurons), tissues (brain regions), and organs (the human complexome), along with their defined interactions. In biological systems, interactions among cellular entities are not always as straightforward as in social and physical networks. Hence, their interpretation becomes much more complicated, compounded by their immense size, temporal dynamics, and non-linear behaviour. However, the vast diversity of biological systems allows us to define them at various levels as network models.
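
    As a toy illustration (the node names are placeholders, not from the book), the sketch below uses the networkx library to build a small protein-interaction-style network and compute a few standard measures of the kind network science applies to biological systems.

    import networkx as nx

    G = nx.Graph()
    # Hypothetical interactions between biomolecules and a pathway node.
    G.add_edges_from([
        ("geneA", "proteinX"), ("proteinX", "proteinY"),
        ("proteinY", "proteinZ"), ("proteinX", "proteinZ"),
        ("proteinZ", "pathway1"),
    ])

    print("Degree of each node:", dict(G.degree()))
    print("Clustering coefficients:", nx.clustering(G))
    print("Shortest path geneA -> pathway1:", nx.shortest_path(G, "geneA", "pathway1"))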

  • by Jaipal
    353,95 kr.

    Modern society is highly dependent on electrical energy for its sustainable development. Conventionally, electricity is generated primarily through thermal, hydro and nuclear power stations. Hydro power plants use water as a source, which is renewable, green (non-polluting) and available free of cost. But commissioning large hydro power plants raises certain key concerns, such as the large capital requirement, long commissioning periods, damage to ecology and the environment, and the huge displacement of the population living nearby. Nuclear power plants require nuclear fuel; while operating under safe conditions they do not pose any threat, but there have always been serious concerns regarding the safe disposal of radioactive waste and the control of radioactive exposure in the event of accidents. Thermal power plants generate electricity by burning fossil fuels (coal, gas, oil). Burning fossil fuels raises serious concerns of environmental degradation on account of the emission of gases. Resources of fossil fuels are limited and hence are depleting at a fast rate. In a nutshell, the demerits of conventional generation have started to outweigh its merits. To reduce dependency on conventional resources, nature has bestowed the planet with renewable energy resources such as solar energy, wind energy, geothermal energy, tidal energy and biomass. These resources are available in abundance, and free of cost. Recent advances in renewable energy technologies have made it possible to harness renewable resources on a large scale and alleviate the stress on conventional generation. The United States Department of Energy defines a microgrid as follows: "Microgrids are localized grids that can disconnect from the traditional grid to operate autonomously. Because they are able to operate while the main grid is down, microgrids can strengthen grid resilience and help mitigate grid disturbances as well as function as a grid resource for faster system response and recovery". The Consortium for Electric Reliability Technology Solutions (CERTS) has published a 'White Book' giving a detailed exposition of the microgrid [2]. CERTS defines a microgrid as "an aggregation of loads and micro-sources providing both power and heat." Across these various definitions, a microgrid is presumed to be a group of interconnected distributed energy resources and loads with precisely defined electrical boundaries.

  • by Sarit Chakraborty
    363,95 kr.

    Digital Microfluidic Biochips (DMFBs) have emerged as a strong alternative to various in-vitro diagnostic tests in the recent past. These chips are expected to be closely coupled with cyber-physical systems and other healthcare-related systems in the near future. Hence, design optimization of such micro- or nano-scale DMFBs requires the interdisciplinary study of computer science and electronics along with biochemistry and medical technology. Research in this new discipline of nano-biotechnology needs the integration of diverse fields such as microelectronics, biochemistry, in-vitro diagnostics, computer-aided design and optimization, and fabrication technology, in association with healthcare engineering methodologies. Potential applications of DMFBs include point-of-care clinical diagnostics, enzymatic analysis (e.g., glucose and lactate assays), high-throughput DNA (deoxyribonucleic acid) sequencing, immunoassays, proteomics (the study of protein structure), environmental toxicity monitoring, water and air pollutant detection, and food processing. These lab-on-chip (LoC) systems provide a viable and low-cost platform for rapid, automated and accurate clinical diagnosis of various diseases, including malaria and the neglected tropical diseases (NTDs) prevalent in developing countries. Typically, a microfluidic chip implements one or more complex bioassays (bio-protocols) by manipulating nanoliter or picoliter volumes of fluid on a single chip a few square centimeters in size.

  • by Bibhu Prasad Nayak
    363,95 kr.

    Electronic devices generally emit electromagnetic (EM) noise to their surroundings and are also susceptible to the surrounding fields. Electromagnetic compatibility (EMC) is the ability of a system to function in the presence of an electromagnetic environment by reducing the unwanted generation and reception of electromagnetic energy, which may otherwise result in malfunctions such as electromagnetic interference (EMI). EMC focuses on two aspects of system behavior: emission and immunity. The main challenge in the case of emission is to identify the noise source in the system and to find countermeasures that reduce the emissions without affecting functionality. Immunity, on the other hand, is the ability of equipment to function correctly in the presence of electromagnetic noise. Based on this, systems can be classified as emitters and receptors. Noise can propagate from the emitter to the receptor through a coupling medium. Coupling can happen either through conduction, via power supply cables shared between devices, or through direct radiation between them. In board-level design, active components like microcontrollers and communication ICs are prone to external noise. PCB traces in the layout provide coupling paths for internal or external EM noise. Using a good immunity EMC model for the ICs, an optimized layout can be designed to reduce the level of coupling and thereby improve overall system EMC performance. Generally, EMC testing is done once a prototype is available. Any failure in the test requires a design iteration, such as the addition of extra filter components or layout changes. This causes a delay in time to market and eventually a loss in revenue. Nowadays, EMC simulation-based design is practiced at the concept level to avoid such losses. It is well known that project cost is reduced when EMC simulation and design are brought into the early design phase.

  • by Kapadia Harsh Khodidas
    383,95 kr.

    Concrete is one of the most widely used construction materials in the world. The mix includes crushed stone, gravel, and sand, which are typically bound together with cement and water. The proportion of each component in concrete is determined by the properties required for the construction work. The mix proportions follow either a nominal mix or a design mix. The nominal mix of concrete is used for ordinary construction work such as small residential structures. The design mix, on the other hand, relies on proportions finalized using laboratory tests performed to determine the compressive strength of the mixture. Additionally, there are various types of concrete, such as plain concrete, reinforced concrete, precast concrete, high-density concrete, ready-mix concrete, decorative concrete and rapid-set concrete. The varieties have multiplied owing to different requirements and applications; for example, ready-mix concrete is widely popular since it is a machine mix with higher precision that is readily available in large volumes at the construction site. Reinforced concrete is widely used for the construction of infrastructure projects like bridges, buildings, highways, dams, and power plants. Annually, billions of tons of concrete are used around the world for the construction of various types of structures. Cracking in concrete structures is one of the most important and primary indicators of a structure's health. It indicates deterioration in the strength of the structure and warns against possible failure. Generally, physical inspections are carried out to detect defects in structures for further rectification, repair and reinforcement as and when required. The physical inspection of cracks in concrete structures of various sizes can assist in determining the overall serviceability of the structure. Image-based automated or semi-automated detection of cracks has the potential to overcome the limitations of manual inspection. As an alternative to physical inspection, images of critical locations of the structure can be captured by state-of-the-art image acquisition devices. The processing of these images provides information about the current condition of the structure. With recent advances in computational algorithms, vision-based inspection is emerging as an efficient technique for monitoring structural health. Using the latest Artificial Intelligence (AI) based techniques, a better interpretation of the collected data can be obtained automatically. Computer vision refers to the field that processes images and videos to extract meaningful information. According to IBM, computer vision is the field of AI that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs and take actions or make recommendations based on that information. Both categories of applications cover aspects such as the identification of structural components, characterizing local and global visible damage, acquiring, storing and communicating structural images, and detecting changes from a reference image.
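
    As a minimal illustration of image-based crack detection (a classical edge-based sketch, far simpler than the AI methods discussed in the book; the file name "wall.jpg" is a placeholder), the snippet below highlights elongated edge contours as crack candidates.

    # Requires: pip install opencv-python
    import cv2

    image = cv2.imread("wall.jpg")                        # placeholder input image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # suppress surface texture noise
    edges = cv2.Canny(blurred, 50, 150)                   # thin dark cracks show up as edges
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Keep only reasonably long contours as crack candidates and mark them in red.
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if max(w, h) > 50:                                # crude length filter (pixels)
            cv2.drawContours(image, [c], -1, (0, 0, 255), 2)
    cv2.imwrite("crack_candidates.jpg", image)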

  • by Narinder Singh
    363,95 kr.

    The word 'composite' refers to a macroscopic combination of two or more different materials that results in a new material with much improved and novel characteristics. Nanocomposites are composites consisting of multiphase materials in which at least one of the phases lies in the nanometric size domain (1 nm-100 nm). Hybrid Nanocomposites (HNCs) are multi-component compounds in which at least one of the constituents (organic or inorganic) has dimensions in the nanometre range, with some interaction between them. The diverse and advanced technological applications of organic and inorganic materials are restricted owing to the poor conductivity, lower stability and solubility of organic materials and the complicated processability and high-temperature operation of inorganic materials. In HNCs, however, the guest-host chemistry may help to overcome these particular limitations through synergetic effects, leading to emerging research in the area of advanced functional materials. In HNCs, the interactions at the molecular and supramolecular level result in the modulation of mechanical, optical, electrical, catalytic and electrochemical properties at the interface. Owing to their diverse morphologies and interfacial interactions, HNCs show excellent and unique properties that are absent in their constituents.

  • by Vini Dadiala
    353,95 kr.

    Access to electricity in the modern era has a noteworthy impact on one's life, whether for lighting, machines, heating, transportation, education or medicine, to name a few. The absence of electricity causes a lot of problems for the common man, and for industry it means a big financial loss. An increase in income also leads to increased electricity consumption, as people then use more electrical appliances such as washing machines, air conditioners, heaters and refrigerators, which consume more energy. Electricity consumption in developing nations is growing rapidly to foster their economic growth. To meet this energy demand, massive additions to power generation capacity are needed. Power generation plays a vital role in the fiscal growth and development of a country. Power generation is the process by which industrial facilities generate electricity from primary energy sources such as coal, natural gas, nuclear, solar, or wind energy. Coal, being the world's most abundant and widely distributed fossil fuel source, plays a key role in worldwide power generation. Despite being responsible for a major percentage of CO2 emissions, coal-fired power plants fuel roughly 41 percent of global electricity, and an even higher percentage in some countries. Power is generated from two types of energy resources: renewable and non-renewable. Renewable resources such as wind, solar, water and geothermal regenerate quickly. Non-renewable sources such as fossil fuels and nuclear fuel are found deep in the earth; they are finite and cannot be replaced fast enough to meet demand. Renewable power, in spite of being cleaner, is less acceptable due to its intermittent availability. This makes fossil fuels the largest contributor to electricity production.

  • by Rachna Sharma
    353,95 kr.

    Underwater wireless communication system designers have been confronted with an ever-increasing need for high-capacity, high-data-rate wireless applications for real-time image and video transmission. Acoustic wireless communication (AWC) has been preferred in water for long-range communication (a few km). However, owing to the low data rate of AWC (up to kbps), visible light communication (VLC) is gaining attention as an attractive alternative, since VLC supports high data rates. Different kinds of light transmission, such as horizontal, vertical, and slant paths, are possible. When visible light travels through an underwater channel, it encounters a variety of adverse effects due to the interaction of the light wave with underwater constituents. Path loss and turbulence are two of the most critical factors that cause severe fading. Path loss quantifies the amount of power a signal loses as it travels through a communication channel. Underwater turbulence occurs due to changes in the refractive index of the water, causing random fluctuations in the received signal strength; this phenomenon is referred to as turbulence-induced fading. The main causes of change in refractive index are variations in temperature, eddy particles and pressure inside the water. Since the surface of the sea is directly exposed to the sun's rays, the temperature of the surface water is higher than that of the deeper, darker water. Visible light communication refers to optical wireless communication systems that operate in the visible band (390-750 nm). LEDs or lasers are used in VLC systems because they can be pulsed at very high speeds without affecting the lighting output or the human eye. The use of LEDs is a sustainable and energy-efficient strategy for both illumination and communication. Sea water is relatively transparent to blue and green light (450 nm to 530 nm) and exhibits low attenuation in this band. Underwater VLC is able to provide data rates up to Gbps in a real-time environment. In underwater VLC systems, spatial diversity has proven to be an effective and widely used technique to alleviate the effects of fading and improve the performance of communication systems. In spatial diversity, several parallel communication links are formed between transmitter and receiver by employing multiple transmit and/or receive branches (MIMO systems).
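
    As a first-order illustration (a commonly used model, not necessarily the one adopted in the book), the path loss of an underwater optical link is often written in a Beer-Lambert form,

    $$ P_r(d) = P_t \, e^{-c(\lambda)\, d}, \qquad c(\lambda) = a(\lambda) + b(\lambda), $$

    where $P_t$ and $P_r$ are the transmitted and received optical powers, $d$ is the link distance, and $a(\lambda)$ and $b(\lambda)$ are the wavelength-dependent absorption and scattering coefficients of the water; the low-attenuation blue-green window mentioned above corresponds to a small $c(\lambda)$.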

  • by Zaid Kamal Madni
    363,95 kr.

    Seeds are the result of sexual reproduction in plants. They are important for the propagation of plants during successive seasons. Seeds have vast biological and economic importance to mankind. They are rich in proteins, starch, carbohydrates, oil reserves, fiber, vitamin E and monounsaturated fats. These reserves support the early stages of growth and development into a plant and also allow seeds to be used as major food sources. Structural analyses of proteins with naturally bound ligands purified from the native source often provide insights into their physiological functions. The structure of vicilin from Solanum melongena was determined, and two similar ligand-binding pockets were found to trap two different ligands, acetate and pyroglutamate, which were suggested to play roles in metabolic processes. A homolog of albumin 2 from Lathyrus sativus (LS-24) was characterized and its structure determined. It exists in dimeric form and reveals a hemopexin-type fold. The binding of this protein to spermine implicates it in polyamine biosynthesis. While spermine binding stabilizes the dimer, the interaction of this protein with heme results in monomer formation. The mutually exclusive binding of heme and spermine in different oligomeric states suggested a role for LS-24 in sensing oxidative stress through a ligand-regulated monomer-dimer transition switch. Two surface-bound features for which the electron density could be prominently defined were observed on the surface of the S. melongena nsLTP-I structure and were modeled as lipids. Legume lectins are the most comprehensively studied group of lectins and have been linked to many pathological and physiological processes. They have been used extensively as immunohistochemistry markers for cancer diagnosis and prognosis, and also in cell profiling.

  • by Samik Datta
    373,95 kr.

    Sentiment analysis is widely used to understand the attitudes of end users and plays an essential role in monitoring user reviews. In sentiment analysis, opinion mining is used to understand the opinion expressed in written language or text. Reviews of different household products generate considerable complexity for e-commerce applications and service providers: the reviewed object may be described with plain text, special symbols and emoticons, and dealing with such unstructured data is highly complicated. In Aspect-Based Sentiment Analysis (ABSA), two kinds of tasks are executed. The procedure of detecting the attributes of the object on which people are commenting is called aspect category detection; in this phase the object attributes are termed aspects, and the identification of the aspect value, or sentiment, is performed as the next task on those aspects. Customer reactions can be understood quickly with sentiment analysis, but analyzing human languages poses many complications. NLP is the field concerned with computers processing human languages such as French, English and German. It has become essential to design models that come close to human performance in these applications. It is complex to delegate different tasks to a machine, and the dependencies must be addressed. The processing of human textual data is an essential field in which machines are trained to observe and process the knowledge contained in data. These kinds of observations need a multi-disciplinary approach, and the processing of naturally occurring text draws on logic, search, machine learning, knowledge representation, planning and statistical techniques. In the present internet era, large volumes of text exist in the form of PowerPoint presentations, Word documents and PDF pages. Programs are needed to make sense of such textual documents, and they require different NLP approaches. Search identifies suitable optimization techniques for the computer; in some cases a selection step is required for processing the data, and search techniques find good candidate solutions on the way to the optimal one. Moreover, logic is essential for performing effective inference and reasoning. The textual data are then converted into logical forms for the machine to process. In knowledge representation, the embedded knowledge is collected according to the machine's knowledge. In NLP, the communication procedure is built on sentences, meanings, phrases, words and syntactic processing, all of which are essential for NLP.
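
    As a toy sketch of the two ABSA tasks described above (aspect detection, then sentiment assignment), the following Python snippet uses a hand-made lexicon; the aspect keywords and sentiment words are hypothetical and far simpler than real models.

    ASPECTS = {"battery": ["battery", "charge"], "screen": ["screen", "display"]}
    POSITIVE = {"great", "good", "excellent", "amazing"}
    NEGATIVE = {"bad", "poor", "terrible", "weak"}

    def aspect_sentiment(review: str) -> dict:
        tokens = review.lower().replace(".", " ").replace(",", " ").split()
        scores = {}
        for aspect, keywords in ASPECTS.items():
            positions = [i for i, t in enumerate(tokens) if t in keywords]
            if not positions:
                continue                                  # aspect not mentioned
            pos = neg = 0
            for i in positions:                           # look at words near the aspect term
                window = tokens[max(0, i - 2): i + 3]
                pos += sum(t in POSITIVE for t in window)
                neg += sum(t in NEGATIVE for t in window)
            scores[aspect] = "positive" if pos > neg else "negative" if neg > pos else "neutral"
        return scores

    print(aspect_sentiment("The screen is great but the battery is terrible."))
    # -> {'battery': 'negative', 'screen': 'positive'}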

  • by Subhash Gautam
    353,95 kr.

    The need for innovative materials grows exponentially as technology advances. In the recent past, many sophisticated materials have been developed by the research community. Metal matrix composites (MMCs) and superalloys are examples of such novel materials. Traditional machining techniques have dominated the machining of various metals and alloys, but they have proven ineffectual in shaping advanced materials. As a result, newer machining methods, called advanced machining processes (AMPs), have been devised and are still being developed to address these challenges. These processes make use of energy sources such as thermal, mechanical, chemical, and electrochemical energy. In traditional machining processes, material is removed through shear and brittle fracture, whereas in AMPs material is removed through melting and vaporisation, chemical action, electrochemical action, or brittle fracture. Electrical discharge machining (EDM) is a widely used AMP that has found a place in today's industrial and research paradigms. EDM is a subtractive manufacturing technology that uses the thermal aspect of a spark to remove material from a workpiece. This spark occurs between the electrodes (tool and workpiece), which are both totally immersed in a dielectric fluid. Thermal energy is used in a controlled manner to develop the required features on the workpiece. The schematic of an EDM system includes the workpiece and tool electrodes, a pulsed power supply system, a servo mechanism, and a dielectric and dielectric supply system. The power supply generates a high-frequency pulsed voltage. The servo mechanism keeps the gap, and the gap voltage, between the electrodes at the desired level. Both the job and the tool are entirely immersed in the dielectric medium, and a pulsed voltage is applied across the electrodes. The high electric field created between the tool and the workpiece liberates free electrons from the cathode, which flow towards the anode and collide with the dielectric along the way. Ionization of the dielectric occurs, resulting in the formation of a plasma channel. In EDM, the significant input process parameters are current, voltage, pulse-on time, and pulse-off time. The high pressure in the plasma channel prevents effective ejection of molten material. During the pulse-off period, the plasma channel collapses, a shock wave forms, and the melted material is removed by the flowing dielectric.
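
    As a hedged illustration (standard first-order relations, not necessarily the formulation used in the book), two quantities that tie these parameters together are the duty factor and the energy per discharge,

    $$ \text{duty factor} = \frac{t_{\text{on}}}{t_{\text{on}} + t_{\text{off}}}, \qquad E_{\text{discharge}} \approx V \, I \, t_{\text{on}}, $$

    so that, for illustrative values of $V = 40\ \mathrm{V}$, $I = 10\ \mathrm{A}$ and $t_{\text{on}} = 100\ \mu\mathrm{s}$, each discharge delivers on the order of $40\ \mathrm{mJ}$ to the gap.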

  • by Lovely Ranjita
    383,95 kr.

    The scientific and technological developments in the field of materials science and engineering have driven multiple strides in the social growth of mankind over the past century. As a result of these innovations, materials today extensively dominate sectors such as structural engineering, transportation engineering, aerospace engineering, communication engineering, electrochemistry and biomedical engineering. Among the vast array of materials, ionic materials have in the past few decades added a new dimension to the growth of science and technology in the form of Polymer Electrolytes (PEs). It is as a result of these advances that polymer electrolytes now dominate the field of ionic materials. Continuous efforts have been devoted to developing newer ion-conducting materials, and their distinctive properties have brought to the forefront their applications in electrochemical devices such as electrochromic windows, supercapacitors, high-energy-density batteries and micro/nano electrochemical sensors; among them, the ion-conducting polymers called polymer electrolytes are receiving remarkable attention because of their immense potential applications. Liquid electrolytes (LEs) show excellent electrochemical performance but suffer from limitations such as a low operating temperature range, liquid oozing, internal short-circuiting and the difficulty of encapsulating a liquid, which restricts the shape and size of electrochemical devices; these limitations lead to their being conveniently replaced with PEs. Solid polymer electrolytes (SPEs), on the other hand, are stable at higher temperatures but are limited by their conformality and relatively lower conductivity. Currently, no single electrolyte meets all the requirements, but gel polymer electrolytes (GPEs) are attractive candidates for such a system. The fundamentals of GPEs with respect to conductivity and cation solvation are the important parameters for the development of various electrochemical devices.

  • by Prasanna B. P.
    383,95 kr.

    Polymers are extended-chain giant organic molecules consisting of many monomer units repeatedly interlinked in a long chain, hence the name: poly, meaning 'many', and mer, meaning 'part' in Greek. A polymer is similar to a necklace made from numerous tiny beads joined together, the beads being the monomers. The non-conducting properties of most polymers constitute a substantial advantage for various practical applications of plastics. However, organic polymers with good electrical conductivity have been observed over the last two decades. Polymeric materials offer good processability, low specific weight and corrosion resistance, as well as exciting prospects for plastics fabricated into films, electronic devices and electrical wires. Because of these properties, in recent years they have attracted the attention of both academic researchers and industrial domains ranging from solid-state physics to chemistry and electrochemistry. Conducting polymers are the class of polymers which can conduct electricity due to their π-electron system. These conducting polymers are sometimes also called organic polymeric conductors, conjugated polymers or simply conductive polymers. The existence of alternate single and double bonds between the carbon atoms, leading to the formation of sigma (σ) and pi (π) bonds, is known as conjugation. Due to the formation of covalent bonds between the carbon atoms, the σ-electrons are fixed and immobile, while the remaining π-electrons are easily delocalized upon doping. Thus, an extended π-system along the conducting polymer backbone confers electronic conduction through the movement of π-electrons along the chain. Ever since the invention of iodine-doped conductive polyacetylene, a new field of conducting polymers, also called "synthetic metals", has been established, with a number of different conducting polymers and their derivatives.

  • by R. Manjunath
    383,95 kr.

    Multimedia data mining is the analysis of a variety of multimedia data to extract patterns based on statistical relationships. Multimedia combines multi-channel and multi-modal information, and its crucial role is to inform, educate and/or entertain. It is a pervasive, exciting and involving means of information and edutainment with multiple facets and wide acceptance. Multimedia data are commonly used in fields such as information science and engineering, geography, modern biology, medicine, weather forecasting, biometrics, manufacturing, digital libraries, retailing, journalism, art and entertainment, the social sciences and space research. A multimedia database system incorporates a multimedia records management framework which handles and provides the basis for storing, extracting and manipulating multimedia data. Multimedia data contain structured and unstructured information, for example text, audio, graphs, images, video and other media. Multimedia data mining is a sub-field of data mining (DM) which is used to discover interesting, hidden knowledge from multimedia data. Data mining algorithms are used to segment data, to identify helpful patterns and to forecast. Regardless of its achievements in many areas, data mining remains a demanding task.

  • by S. Manigandan
    373,95 kr.

    A free jet is defined as a rapid, pressure-driven stream issuing from a nozzle exit, which exhibits characteristic flow-field behaviour along the axial distance normalized by the exit diameter (X/D), where X is the axial distance and D is the diameter of the nozzle exit. An extensive understanding of the flow and mixing characteristics of supersonic jets has been gaining importance in recent years owing to their vast applications in areas such as rockets, aircraft engines, and nozzles for different purposes. Over the past few decades, the fundamental understanding of jets and the factors affecting jet spread has been studied extensively using analytical, experimental and numerical techniques. The effectiveness and efficiency of a jet are determined using parameters such as the jet spread rate, the potential core length and the jet decay. In addition to these, the shear layer also defines the efficiency of a free jet. A free jet is a pressure-driven rapid stream issuing either from a nozzle or an orifice into a quiescent ambience without any obstruction; it can also be described in terms of its shear layer. The shear layer is the boundary layer where the interaction between the jet and the atmosphere takes place due to the exchange of momentum. The magnitude of the shear layer depends not only on the shape of the exit but also on the Mach number of the jet, and it increases with the jet Mach number. Hence, understanding the physics of the shear layer during the evolution of the jet is important. The characteristics of a jet are also influenced by the shape of the nozzle, the exit profile, the thickness, the velocity profile and the orientation of the jet. Due to the velocity difference between the entraining flow and the ambient fluid, a boundary layer is formed at the jet periphery; this boundary layer is called the shear layer. The shear layer is usually highly unstable as the jet spreads in the downstream direction, which leads to the formation of vortices; these vortices, in turn, enhance the mixing characteristics of the free jet. Jets are classified into free jets, confined jets, and isothermal and non-isothermal jets. A free jet is a rapid stream which discharges into an ambient fluid. If the jet is influenced by reverse flow, it is called a confined jet; if the jet is attached to a surface, it is called an attached jet. Based on the effect of temperature, jets are categorized as isothermal or non-isothermal.
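
    For reference (standard definitions, not specific to this book), the jet Mach number that governs the shear-layer behaviour described above is

    $$ M = \frac{U_j}{a}, \qquad a = \sqrt{\gamma R T}, $$

    where $U_j$ is the jet exit velocity, $a$ is the local speed of sound, $\gamma$ is the ratio of specific heats and $R$ is the specific gas constant; the jet is subsonic for $M < 1$, sonic at $M = 1$ and supersonic for $M > 1$.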

  • by Mrinal Kanti Sen
    363,95 kr.

    A natural disaster cannot be anticipated by humankind in any form. Disasters may occur as floods, earthquakes, volcanoes, or hurricanes. The menace of these occurrences cannot be stopped at will, but a robust framework can be created by applying the concept of resilience. Resilience means the strength of a community to resist the effect of a hazard and to bounce back to the community's desired level of performance after the event. Resilience is defined as a system's ability to withstand and recover from the effects of natural or human-made hazards. For a community, resilience can be taken as the time required by the socio-physical infrastructure to bounce back to its original or functional state. The concept of resilience is well established in various domains, such as ecology, finance, engineering, and medical science. Infrastructure resilience mainly depends on four key factors: robustness, the ability of a system to resist the effects of a disaster; redundancy, the availability of alternative resources ensuring operational requirements during and after a disaster; rapidity, the time taken to bounce back to the original or desirable position; and resourcefulness, the availability of resources for recovery. Any system's performance loss and recovery profile are typically uncertain in nature, primarily due to the inherent uncertainty in natural hazards (related to system damage) and the time variation in the restoration process due to resource availability. Performance loss depends mainly on both robustness and redundancy. In most cases, recovery is modelled as a linear or stepped profile for critical infrastructure systems instead of a nonlinear one. The recovery profile depends mainly on the type of infrastructure system under consideration and on resource availability. For instance, a stepped recovery pattern is typically followed in restoring road and bridge systems.
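
    One common way to quantify this idea (a standard formulation from the resilience literature, stated here for illustration rather than as the book's own metric) is the normalized area under the functionality curve $Q(t)$:

    $$ R = \frac{1}{t_r - t_0} \int_{t_0}^{t_r} Q(t)\, \mathrm{d}t, $$

    where $t_0$ is the time the hazard strikes, $t_r$ is the time at which functionality is fully restored, and $Q(t)$ is the system performance (e.g., between 0 and 100%); robustness sets the initial drop in $Q(t)$, while rapidity determines how quickly it climbs back to its pre-event level.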

  • by Suhas Krishna Diwase
    373,95 kr.

    A disaster is a serious disruption of the functioning of a community or a society involving widespread human, material, economic or environmental losses and impacts, which exceeds the ability of the affected community or society to cope using its own resources [1]. Global data show that natural disaster events have increased in the past 100 years from fewer than 10 events per year to about 400 events per year. From 2005 to 2014, the Asia-Pacific region witnessed 1,625 reported disaster events in which approximately 500,000 people lost their lives, around 1.4 billion people were affected and there was $523 billion worth of economic damage [3]. The population at risk in the Asia-Pacific region is very high, as reportedly around 740 million city dwellers in this region live in multi-hazard hotspots that are vulnerable to floods, earthquakes, cyclones and landslides. According to the Global Assessment Report on Disaster Risk Reduction 2015, the cost of disasters worldwide has reached an average of $250 billion to $300 billion every year. Climate change is expected to impact societies and increase their vulnerability to various hydro-meteorological disasters, which would have a disastrous impact on developing countries. Although it is impossible to eliminate disaster risk, its impact can be minimized by planning, preparing and building capacities for mitigation, coupled with prompt action. Traditionally, disaster management consisted primarily of reactive mechanisms wherein response was the main focus area, instead of a more comprehensive approach involving regular participation from communities and all other stakeholders. However, the past few years have witnessed a gradual shift towards a more proactive, mitigation-based approach wherein the damage caused by any disaster can be minimized largely by developing early warning systems, careful planning, and prompt action.

  • by Jaya Mabel Rani A
    363,95 kr.

    Today, data is growing tremendously all around the world. According to Statista, the total amount of data created globally was forecast to reach 79 zettabytes by 2021. Using this data, data analysts can analyse, visualize and construct patterns based on end-user requirements. Analyzing and visualizing the data requires fundamental techniques for understanding the types, size and frequency of the data sets in order to take proper decisions. There are different types of data, such as relational databases, data warehouses, transactional data, multimedia data, spatial data, WWW data, time-series data, heterogeneous data and text data. There are a large number of data mining techniques, including pattern recognition and machine learning algorithms. This book focuses on data clustering, which is one of the sub-parts of machine learning. Clustering is an unsupervised machine learning technique used for statistical data analysis in many fields and is a sub-branch of data mining. Under data mining there are two main sub-branches: supervised machine learning and unsupervised machine learning. All classification methods, including rule-based classification, Decision Tree (DT) classification, random forest classification and support vector machines, as well as linear-regression-based learning, come under supervised learning. All clustering algorithms, such as K-Means (KM), K-Harmonic Means (KHM), fuzzy clustering, hybrid clustering, optimization-based clustering and association-based mining, come under unsupervised learning. Clustering algorithms can also be categorized into different types, such as traditional clustering algorithms, hierarchical clustering algorithms, grid-based clustering, partitioning-based clustering and density-based clustering. There is a wide variety of clustering algorithms for grouping data points into a set of disjoint classes. After clustering, all related data objects fall into one group and dissimilar data objects fall into other clusters. Clustering algorithms can be applied in most fields, such as medicine, engineering, financial forecasting, education, business and commerce. Clustering algorithms can also be used in data science to analyse more complicated problems and to obtain more valuable insights from data.
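
    As a minimal, runnable illustration of the K-Means algorithm mentioned above (synthetic data and the scikit-learn implementation; not code from the book):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # Synthetic feature matrix standing in for any numeric data set.
    X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=42)

    kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
    labels = kmeans.fit_predict(X)          # assign each point to its nearest centroid

    print("Cluster sizes:", np.bincount(labels))
    print("Centroids:\n", kmeans.cluster_centers_)
    print("Within-cluster sum of squares (inertia):", kmeans.inertia_)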

  • by Roshan Jose
    358,95 kr.

    Dielectric materials are electrically insulating materials which do not contain any free charge carriers for electric conduction. These materials can store charge, and the molecules of the dielectric material become polarized in the presence of an electric field. The displacement of the positive and negative charges with respect to their equilibrium positions due to the applied electric field is called polarization. The extent of polarization can be quantified by the electric dipole moment. There are two types of dielectric materials: polar and non-polar dielectrics. Polar dielectric materials possess a permanent dipole moment and an asymmetric crystal structure. Non-polar dielectric materials do not have a permanent dipole moment, and their crystal structure is symmetric. Based on the symmetry elements (centre of symmetry, axes of rotation, mirror planes and their combinations), crystals are divided into 32 point groups. Of these 32 crystal classes, 21 do not have a centre of symmetry, i.e. the centres of positive and negative charge do not coincide. The remaining 11 classes possess a centre of symmetry and hence cannot exhibit polar behaviour. Of the 21 non-centrosymmetric point groups, one class (432) nevertheless shows no polar effects, because its other symmetry elements prevent polar behaviour. One or more polar axes are present in the remaining 20 point groups, and these exhibit various polar effects such as piezoelectricity, pyroelectricity and ferroelectricity. The term pyroelectricity refers to the temperature dependence of the magnitude of the polarization. Some dielectric materials exhibit spontaneous electric polarization without the application of an electric field; such materials are known as ferroelectric materials, and the phenomenon is referred to as ferroelectricity. This phenomenon was first observed in Rochelle salt by J. Valasek in 1921. A ferroelectric material has a non-centrosymmetric crystal structure containing a unique polar axis. It contains electric dipoles that are spontaneously polarized and can be reversed by the application of a field in the opposite direction. The variation of polarization with the electric field is not linear for such materials but forms a closed loop called a hysteresis loop. The reversible spontaneous polarization of these materials is utilized for the development of non-volatile ferroelectric random access memory (FeRAM).
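
    As a brief illustrative aside (textbook relations, not specific to this book), the quantities mentioned above are related by

    $$ \vec{p} = q\,\vec{d}, \qquad \vec{P} = \frac{1}{V}\sum_i \vec{p}_i, \qquad \vec{D} = \varepsilon_0 \vec{E} + \vec{P}, $$

    where $\vec{p}$ is the dipole moment of charges $\pm q$ separated by $\vec{d}$, $\vec{P}$ is the polarization (dipole moment per unit volume) and $\vec{D}$ is the electric displacement; a linear dielectric follows $\vec{P} = \varepsilon_0 \chi_e \vec{E}$, whereas a ferroelectric instead traces the hysteresis loop described above.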

  • by G. Muthulakshmi
    373,95 kr.

    Environmental pollution and the energy crisis forecast a scarcity of fuel, rising global temperatures, and the loss of biodiversity. The exhaustion of fossil fuel reserves and rapidly rising global warming have heightened public awareness of the necessity to phase out the fossil fuel industry. Consequently, the rapid increase in energy production from renewable sources has compelled the development of a new generation of energy storage systems, because renewable sources cannot produce energy on demand. Renewable resources are virtually infinite in terms of duration, but they are limited in terms of energy available per unit of time; sources such as solar cells, wind turbines, solar thermal collectors and geothermal power are promising sources of energy with a smaller carbon footprint. As a result, energy storage devices are critical for maximizing the use of renewable energy and for reducing the carbon footprint on the environment. Moreover, energy storage devices are indispensable for the future of e-vehicles, consumer electronics and the transmission and distribution of electricity. Energy storage is essential to the energy security of today's energy networks. Most energy is stored in the form of raw or refined hydrocarbons, whether as coal heaps or oil and gas reserves. These energy sources cause environmental degradation through their emissions of greenhouse gases and heat. The only exception is the pumped hydroelectric plant, which can provide a large amount of energy in a short period of time while also improving electric system reliability. The purpose and form of energy storage are likely to change significantly as energy systems evolve to use low-carbon technology. Two broad trends will likely drive this change. First, with intermittent renewable generation and static nuclear production playing an important role in supplying electricity, it becomes difficult to match generation with demand, and imbalances will grow and dominate over time. Second, moving away from fossil fuel production means that, with the exception of flexible gas generation, most power sources can no longer be stored as hydrocarbons. Likewise, if a low-carbon electricity supply replaces oil and gas for domestic and industrial power needs, the structure of electricity demand will change dramatically.

  • by Viswanathan K
    353,95 kr.

    The concept of nanotechnology was initially proposed by the famed American scientist Richard P. Feynman in 1959, during his seminal address "There's Plenty of Room at the Bottom" to the American Physical Society at Caltech. His vision was realized in less than half a century thanks to the tireless work and considerable contributions of scientists all across the world. The discovery of carbon nanoclusters known as buckyball molecules, followed by the creation of carbon nanotubes, sparked the true nanoscience revolution in the mid-1980s. These breakthroughs shone a global spotlight on these one-of-a-kind nanoscale materials. Since then, various studies have explored and evaluated in depth the basic and technological significance of several innovative nanostructured materials. The benefits of these materials are exhibited in a variety of current electronic gadgets, biomedical applications, consumer items, food, and agriculture. Nanoscience concerns phenomena on the nanometre scale. Atoms are just a few tenths of a nanometer in size, while particles are often many nanometers in size. A nanometer (10^-9 m) is a remarkable point on the dimensional scale, where the smallest man-made objects meet molecules and atoms. Nanoparticles can be manufactured using biogenic routes, which are better than chemical routes in many respects. Such biological, enzymatic methods of producing nanoparticles are generally eco-friendly. Because of the bacterial carrier matrix, nanoparticles created by the bio-organic enzymatic process have greater catalytic activity and surface area and better contact between metal ions and enzymes. Chemical sensors, electronics, health and medicine, transportation, energy, and the environment are some of the potential uses of nanotechnology. The greater surface area, self-assembly, and quantum processes are thought to be responsible for the special features of nanomaterials. At the nanoscale, particularly at its lower end, quantum effects can start to predominate in the behaviour of matter and have an impact on how materials behave optically, electrically, and magnetically. The importance of nanomaterials lies in their potential to revolutionize a wide range of industries, including electronics, energy, and medicine, through the supply of enhanced performance properties. Nanoscale products and processes can be made more easily thanks to nanomaterials.

  • by Ansgar Mary N
    363,95 kr.

    In today's internet world, keeping information and data safe and secure on computers and other storage devices is a major challenge. Cryptography is a powerful tool for safeguarding sensitive data while it is stored on a hard drive or sent over an insecure network connection. Cryptography is used in various applications of technologically advanced societies, such as the security of ATM cards, electronic commerce and computer passwords. The goals of cryptography are data integrity, confidentiality, authentication and non-repudiation. Cryptography involves both encryption and decryption while maintaining the confidentiality of the keys. Another factor to consider is the strength of the key, which determines how difficult it is to break the encryption and retrieve the key. Deoxyribonucleic Acid (DNA) cryptography is a new and innovative branch of information security that can be used in combination with traditional cryptographic approaches to improve security. Cryptography is the science of hiding the meaning of a message by putting it into a secret or coded language. It is a technique for preventing unwanted people from accessing data. The encryption technique and the key are the two most important components of cryptography. It is used in data communication for sending and receiving messages in a secret form while they travel through the network. Cryptography is the area that uses arithmetic and logical operations to create powerful encryption methods to secure data and communication over the internet. In today's online business and e-commerce world, the confidentiality, availability and integrity of stored and transferred data are crucial. Cryptography is used to protect data from third parties as well as for user authentication. It is the art of using an encryption key to encode secret information in an illegible, concealed format. Decryption with the same secret key retrieves the data in its original form at the receiver end. The encrypted data, and the ability to decrypt it, are available only to the person who has the secret key. The most important components of any cryptographic process are the plain data, the secret key, the encryption algorithm, the cipher data and the decryption algorithm.
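
    As an illustrative sketch (not the book's scheme), the snippet below combines ordinary symmetric encryption with a simple two-bits-per-base "DNA" encoding of the ciphertext; the mapping 00->A, 01->C, 10->G, 11->T is one common textbook choice, and the message text is a placeholder.

    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    BASES = "ACGT"  # 00->A, 01->C, 10->G, 11->T

    def to_dna(data: bytes) -> str:
        # Encode each byte as four bases, most significant bits first.
        return "".join(BASES[(byte >> shift) & 0b11] for byte in data for shift in (6, 4, 2, 0))

    def from_dna(seq: str) -> bytes:
        out = bytearray()
        for i in range(0, len(seq), 4):
            byte = 0
            for base in seq[i:i + 4]:
                byte = (byte << 2) | BASES.index(base)
            out.append(byte)
        return bytes(out)

    key = Fernet.generate_key()                 # secret key shared by sender and receiver
    cipher = Fernet(key)
    token = cipher.encrypt(b"transfer 100 units to account 42")
    dna_form = to_dna(token)                    # ciphertext written as a DNA-style sequence
    print(dna_form[:40], "...")
    recovered = cipher.decrypt(from_dna(dna_form))
    print(recovered.decode())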

  • by Abhijit Boruah
    373,95 kr.

    Anthropomorphic prosthetic hands must resemble human hands in their kinematic abilities and in significant features such as object recognition on grasp. Implementation of object recognition by prosthetic hands requires prior information on the extrinsic structural properties of objects as well as on the hand's position and orientation. Object recognition approaches fall primarily into two categories: vision-based and tactile. Vision-based learning methods have dominated the realm of object recognition for robotic and prosthetic hands in the past decades. However, relying on vision alone is not sufficient for the perceptual requirements of a prosthetic hand. The human hand is a complex structure with multiple degrees of freedom (DoF), leading to various movements and grasp formations. Acquiring the full range of motion in the fingers and the wrist during prosthetic hand development is critical. Creating such dexterity involves an intense investigation to extract knowledge of motion and joint constraints in the phalanges and wrist bones. The increase of digital information in the current age has elevated the demand for semantically rich annotations for applications shared over the internet. In recent years, the popularity of philosophical knowledge representation methods like ontologies for understanding and utilizing relevant domain concepts for problem specification has escalated. Ontologies are significant in providing a meaningful schema by linking unstructured data. Object recognition during a grasp is an essential attribute of a prosthetic hand, which takes its development closer to its natural counterpart.

  • af Podapati Gopi Krishna
    383,95 kr.

    Tropical cyclones are cyclonic systems that occur over warm ocean waters in tropical areas and have outer circulations that can stretch more than 1000 kilometers from the storm centre. Every year, over 80 tropical cyclones form over the tropical oceans, posing a serious hazard to people and property in many regions of the world. Almost every year, these abrupt, unpredictable and ferocious storms wreak enormous havoc along the coasts and on the islands in their paths. The typical life span of a tropical cyclone is 6 to 9 days, but it can range anywhere from a few hours to three or four weeks. Tropical cyclones are well known for their destructive nature, and in terms of human and property damage they are the deadliest of all natural catastrophes: around 80-100 cyclones strike the world each year. Except for the South Atlantic and Southeast Pacific, tropical cyclones originate over ocean basins at lower latitudes in all oceans; low sea surface temperatures in the South Atlantic and Southeast Pacific basins make cyclogenesis difficult. The North Indian Ocean (NIO) is particularly prone to the formation of storms that affect the Indian region, and the Indian Ocean (both north and south of the equator) is home to around a quarter of the world's cyclones. Tropical storm activity increases worldwide in late summer, when the temperature differential between the air aloft and the sea surface temperature (SST) is at its largest; each basin, however, has its own seasonal cycle. Tropical cyclones are referred to by different names in different parts of the world: they are known as hurricanes in the Atlantic and Eastern Pacific, typhoons in the Western Pacific, and tropical cyclones in the Indian Ocean. The winds of a tropical storm spin anticlockwise in the Northern hemisphere and clockwise in the Southern hemisphere. Tropical cyclones are restricted to a few places and seasons, mostly in the western reaches of the large tropical oceans, and, as noted above, roughly 80 such systems attain tropical storm intensity (34 knots or more) each year. Approximately 80% of tropical cyclones develop in the ITCZ (Inter-Tropical Convergence Zone) or poleward of it. Tropical cyclone development can also differ from basin to basin due to differences in terrain, geology, oceanography and large-scale flow patterns.

  • af Pavan D Gramapurohit
    363,95 kr.

    The red planet in the night sky has attracted human attention from the ancient times of naked-eye observations to the modern era of spacecraft measurements. The Martian atmosphere is characterized by variations in its composition and thermal structure, as observed across its lower and upper regions. The formation of the ionosphere, along with the vertical profiles of ions and electrons, sets the plasma environment of Mars. Mars is about half the size of Earth by diameter, and the acceleration due to gravity at its surface is only about 40% of that on Earth. As a result, the escape velocity on Mars is nearly half that on Earth, which makes it easier for atmospheric gases to escape the planet's gravity. In addition, while Earth possesses a strong dipole magnetic field, such an intrinsic field is absent at Mars; instead, the planet possesses spatially asymmetric, inhomogeneous magnetic fields. It is believed that the dynamo activity in Mars' core, which produced the intrinsic magnetic field, shut off ~4.1 Gyr ago, and the magnetic anomalies seen today are remnant fields retained by the crustal rocks magnetized during the active phase of the Martian internal dynamo. More on the crustal magnetic anomalies will be discussed in section 1.5. Owing to the lack of an intrinsic magnetic field, Mars has no global protective shield, and the solar wind interacts directly with its atmosphere. The weaker gravity and the direct interaction with the solar wind together result in a greater escape of gases from the Martian atmosphere.

    Similar to Earth, the altitudinal variation of the neutral atmospheric temperature on Mars results in a layered atmospheric structure. The vertical structure of the Martian atmosphere can be broadly divided into three distinct regions: the troposphere, the mesosphere, and the thermosphere. The troposphere extends from the surface up to ~60 km and has a lapse rate of ~2.5 K/km; dust particles suspended in the atmosphere absorb solar radiation and act as an additional heat source at these altitudes, resulting in a lapse rate lower than the dry adiabatic lapse rate. Temperature variation in the troposphere is controlled mainly by solar radiation, while large-scale circulation systems largely govern how this heat is redistributed and stabilized. It is important to note that, unlike Earth, Mars does not have a stratosphere.
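
    The "nearly half" comparison of escape velocities follows directly from the relation v_esc = sqrt(2 g R). The short sketch below evaluates it with commonly quoted approximate surface-gravity and radius values for Earth and Mars; these reference figures are standard textbook numbers, not taken from the book.

        import math

        def escape_velocity(surface_gravity: float, radius_m: float) -> float:
            # Escape velocity v = sqrt(2 * g * R), returned in m/s.
            return math.sqrt(2.0 * surface_gravity * radius_m)

        # Commonly quoted approximate reference values.
        v_earth = escape_velocity(9.81, 6.371e6)    # ~11.2 km/s
        v_mars = escape_velocity(3.71, 3.3895e6)    # ~5.0 km/s

        print(f"Earth: {v_earth / 1e3:.1f} km/s, Mars: {v_mars / 1e3:.1f} km/s, "
              f"ratio: {v_mars / v_earth:.2f}")

    With these values the ratio comes out near 0.45, consistent with the statement that the Martian escape velocity is nearly half of Earth's.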

  • af Josalin Jemima J
    353,95 kr.

    Economic development is impacted significantly by conventional energy sources, which are hazardous to humans and the environment. To meet energy demand and reduce greenhouse gas emissions, the world is shifting towards alternative renewable energy sources. Photovoltaics (PV) is the most common distributed energy source for microgrid formation and one of the world's leading renewable energy sources because of its modular design, minimal operational noise, and ease of maintenance. Solar photovoltaic systems, panels that turn sunlight into electricity, are therefore among the most widely deployed renewable installations. PV production depends strongly on solar irradiation, temperature, and other weather conditions, so predicting solar irradiance amounts to predicting solar power generation one or more steps ahead of time. Prediction supports photovoltaic system development and operation while providing numerous economic benefits to energy suppliers. Numerous applications employ prediction, at an appropriate forecast time-resolution, to improve power grid operation and planning: stability and regulation require knowledge of solar irradiation over the next few seconds; reserve management and load following require it for the next several minutes or hours; and scheduling and unit commitment require knowledge of solar irradiation over the next few days. Predicting solar irradiation precisely is crucial, since the major issue with solar energy is its inherent variability. With accurate and reliable solar irradiance predictions, grid operators can balance the demand and supply of power and design better solar PV plants.

    Electric utilities must generate enough energy to balance supply and demand. The electric sector has consequently focused on solar PV forecasting to assist its management systems, which is crucial for the growth of additional power generation such as microgrids. Forecasting solar irradiance has always been important to renewable energy generation, since solar generation is location- and time-specific. When an estimate of solar generation is available, the grid can operate more consistently under unpredictable conditions, since solar energy produces some amount of power every day of the year, even on cloudy days.
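
    As a baseline for the kind of one-step-ahead irradiance prediction described above, the sketch below implements a simple persistence forecast (the next sample is assumed equal to the current one) and scores it with mean absolute error. This is a generic illustrative baseline with made-up sample values, not the forecasting model proposed in the book; real forecasters add weather inputs and learned models.

        # Persistence baseline: forecast the next irradiance sample as the current one.

        def persistence_forecast(irradiance: list[float]) -> list[float]:
            # One-step-ahead forecast: y_hat[t+1] = y[t].
            return irradiance[:-1]

        def mean_absolute_error(actual: list[float], predicted: list[float]) -> float:
            return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

        # Hypothetical hourly global horizontal irradiance samples (W/m^2).
        ghi = [0.0, 120.0, 340.0, 520.0, 610.0, 580.0, 430.0, 210.0, 30.0]

        forecast = persistence_forecast(ghi)          # predicts ghi[1:] from ghi[:-1]
        print(mean_absolute_error(ghi[1:], forecast))

    Any proposed irradiance forecaster is usually judged against a simple baseline like this one, at the time resolution appropriate to the grid application it serves.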