Abstract:
A spanning connectedness property is one which involves the robust existence of a spanning subgraph of some special form, say a Hamiltonian cycle in which a sequence of vertices appears in an arbitrarily given ordering, or a Hamiltonian path in the subgraph obtained by deleting any three vertices, or three internally vertex-disjoint paths with any given endpoints such that the three paths meet every vertex of the graph and cover the edges of an almost arbitrarily given linear forest of a certain fixed size. Let π = π₁⋯πₙ be an ordering of the vertices of an n-vertex graph G. For any positive integer k ≤ n−1, we call π a k-thick Hamiltonian vertex ordering of G provided it holds for all i ∈ {1, …, n−1} that πᵢπᵢ₊₁ ∈ E(G) and the number of neighbors of πᵢ among {πᵢ₊₁, …, πₙ} is at least min(n−i, k); for any nonnegative integer k, we say that π is a (−k)-thick Hamiltonian vertex ordering of G provided |{i : πᵢπᵢ₊₁ ∉ E(G), 1 ≤ i ≤ n−1}| ≤ k+1. Our main discovery is that the existence of a thick Hamiltonian vertex ordering guarantees that the graph has various kinds of spanning connectedness properties, and that for interval graphs, quite a few seemingly unrelated spanning connectedness properties are equivalent to the existence of a thick Hamiltonian vertex ordering. Owing to the connection between Hamiltonian thickness and spanning connectedness properties, we can present several linear-time algorithms for associated problems. This paper suggests that much work in graph theory may have a spanning version which deserves further study, and that Hamiltonian thickness may be a useful concept in understanding many spanning connectedness properties.
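The two ordering conditions in this abstract can be checked directly from their definitions. The sketch below is illustrative only: the function names and the dict-of-neighbour-sets graph encoding are assumptions, not from the paper.

```python
def is_k_thick(G, pi, k):
    """Check the k-thick Hamiltonian vertex ordering condition.

    G  : graph as a dict mapping each vertex to the set of its neighbours
    pi : list of all vertices of G (the candidate ordering pi_1 ... pi_n)
    k  : positive integer, k <= n - 1
    """
    n = len(pi)
    for i in range(n - 1):                          # 0-based; the abstract's i runs 1..n-1
        if pi[i + 1] not in G[pi[i]]:               # consecutive vertices must be adjacent
            return False
        later = sum(1 for v in pi[i + 1:] if v in G[pi[i]])
        if later < min(n - (i + 1), k):             # min(n - i, k) with 1-based i
            return False
    return True


def is_minus_k_thick(G, pi, k):
    """(-k)-thick: at most k + 1 consecutive pairs in pi may be non-adjacent."""
    gaps = sum(1 for i in range(len(pi) - 1) if pi[i + 1] not in G[pi[i]])
    return gaps <= k + 1
```

For instance, the natural ordering 0, 1, 2, 3 of the path P₄ is 1-thick but not 2-thick, while any ordering of the complete graph K₄ is 3-thick, since every vertex is adjacent to all later vertices.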
Abstract:
Dear Editor, Large vestibular aqueduct syndrome was first described in 1978 by Valvassori and Clemis.1 Children with large vestibular aqueduct syndrome may have hearing in childhood, enabling the acquisition of normal speech and language development with hearing aids. Typically, their hearing progressively deteriorates in association with an incident of minor head trauma, exercise or an upper respiratory tract infection. Surgical management, including shunts and obliteration of the vestibular aqueduct, has proven to be ineffective. When the hearing of these patients has progressed to the severe-to-profound level bilaterally, it has been proposed that cochlear implantation may be an effective treatment. Many reports have demonstrated that cochlear implantation is an effective method of providing hearing to children with severe-to-profound hearing loss caused by large vestibular aqueduct syndrome. However, most of these studies examined few cases and reported only short-term speech perception results in children. Therefore, the aim of this study was to report the long-term speech perception outcomes of children with large vestibular aqueduct syndrome implanted at an early age, and to compare the performance of a cochlear implant (given to the initially poorer hearing ear) with that of a contralateral hearing aid (better hearing ear).
Abstract:
Background: Gout is the most common inflammatory arthritis, which, if left untreated or inadequately treated, will lead to joint destruction, bone erosion and disability due to crystal deposition. Uric acid transporter 1 (URAT1) is a promising therapeutic target for urate-lowering therapy. Objective: The goal of this work is to understand the structure-activity relationship (SAR) of a potent lesinurad-based hit, sodium 2-((5-bromo-4-((4-cyclopropyl-naphth-1-yl)methyl)-4H-1,2,4-triazol-3-yl)thio)acetate (1c), and based on that to discover a more potent URAT1 inhibitor. Methods: The SAR of 1c was systematically explored and the in vitro URAT1 inhibitory activity of synthesized compounds 1a-1t was determined by the inhibition of URAT1-mediated [8-14C]uric acid uptake by human embryonic kidney 293 (HEK293) cells stably expressing human URAT1. Results: Twenty compounds 1a-1t were synthesized. SAR analysis was performed. Two highly active URAT1 inhibitors, sodium 2-((5-bromo-4-((4-n-propylnaphth-1-yl)methyl)-4H-1,2,4-triazol-3-yl)thio)acetate (1j) and sodium 2-((5-bromo-4-((4-bromonaphth-1-yl)methyl)-4H-1,2,4-triazol-3-yl)thio)acetate (1m), were identified, which were 78- and 76-fold more active than parent lesinurad in the in vitro URAT1 inhibitory assay, respectively (IC50 values for 1j and 1m were 0.092 μM and 0.094 μM, respectively, against human URAT1 vs 7.18 μM for lesinurad). Conclusion: Two highly active URAT1 inhibitors were discovered. The SAR exploration also identified the more flexible naphthyltriazolylmethane as a novel molecular skeleton that will be valuable for the design of URAT1 inhibitors, as indicated by the observation that many of the synthesized naphthyltriazolylmethane-bearing derivatives (1b-1d, 1g, 1j and 1m) showed significantly improved URAT1 inhibitory activity (sub-micromolar IC50 values) as compared with lesinurad, which has the rigid naphthyltriazole skeleton.
Abstract:
The National Centers for Environmental Prediction (NCEP) and National Center for Atmospheric Research (NCAR) have cooperated in a project (denoted "reanalysis") to produce a retroactive record of more than 50 years of global analyses of atmospheric fields in support of the needs of the research and climate monitoring communities. This effort involved the recovery of land surface, ship, rawinsonde, pibal, aircraft, satellite, and other data. These data were then quality controlled and assimilated with a data assimilation system kept unchanged over the reanalysis period.
Abstract:
Owing to the imminent fixed mobile convergence, Internet applications are frequently accessed through mobile devices. Given limited bandwidth and unreliable wireless channels, content delivery in mobile networks usually experiences long delays. To accelerate content delivery in mobile networks, many solutions have been proposed. In this paper, we present a comprehensive survey of the most relevant research activities for content delivery acceleration in mobile networks. We first investigate live network measurements, and identify the network obstacles that dominate content delivery delays. Then, we classify existing content delivery acceleration solutions in mobile networks into three categories: mobile system evolution, content and network optimization, and mobile data offloading, and provide an overview of available solutions in each category. Finally, we survey the content delivery acceleration solutions tailored for web content delivery and multimedia delivery. For web content delivery acceleration, we overview existing web content delivery systems and summarize their features. For multimedia delivery acceleration, we focus on accelerating HTTP-based adaptive streaming while briefly reviewing other multimedia delivery acceleration solutions. This paper presents a timely survey on content delivery acceleration in mobile networks, and provides a comprehensive reference for further research in this field.
Abstract:
The conventional IMRT planning process involves two stages: the first stage consists of fast but approximate idealized pencil beam dose calculations and dose optimization, and the second stage consists of discretization of the intensity maps followed by intensity map segmentation and a more accurate final dose calculation corresponding to physical beam apertures. Consequently, there can be differences between the presumed dose distribution corresponding to pencil beam calculations and optimization and a more accurately computed dose distribution corresponding to beam segments that takes into account collimator-specific effects. IMRT optimization is computationally expensive and has therefore led to the use of heuristic approaches (e.g., simulated annealing and genetic algorithms) that do not encompass a global view of the solution space. We modify the traditional two-stage IMRT optimization process by augmenting the second stage with accurate Monte Carlo-based kernel-superposition dose calculations corresponding to beam apertures, combined with an exact mathematical programming-based sequential optimization approach that uses linear programming (SLP). Our approach was tested on three challenging clinical test cases with multileaf collimator constraints corresponding to two vendors. We compared our approach to the conventional IMRT planning approach, a direct-aperture approach and a segment weight optimization approach. Our results in all three cases indicate that the SLP approach outperformed the other approaches, achieving superior critical structure sparing. Convergence of our approach is also demonstrated. Finally, our approach has also been integrated with a commercial treatment planning system and may be utilized clinically.
Abstract:
One- and two-way communication with digital compressed visual signals is now an integral part of the daily life of millions. Such commonplace use has been realized by decades of advances in visual signal compression. The design of effective, efficient compression and transmission strategies for visual signals may benefit from proper incorporation of human visual system (HVS) characteristics. This paper overviews psychophysics and engineering associated with the communication of visual signals. It presents a short history of advances in perceptual visual signal compression, and describes perceptual models and how they are embedded into systems for compression and transmission, both with and without current compression standards.
Abstract:
Doxorubicin (DOX) is one of the most effective cytotoxic anticancer drugs and has been successfully applied in clinics to treat haematological malignancies and a broad range of solid tumours. However, the clinical applications of DOX have long been limited due to severe dose-dependent toxicities. Recent advances in the development of DOX delivery vehicles have addressed some of the non-specific toxicity challenges associated with DOX. These DOX-loaded vehicles are designed to release DOX in cancer cells effectively by cleaving linkers between DOX and carriers in response to stimuli. This article focuses on various strategies that serve as potential tools to release DOX from DOX-loaded vehicles efficiently, to achieve a higher DOX concentration in tumour tissue and a lower concentration in normal tissue. With a deeper understanding of the differences between normal and tumour tissues, it might be possible to design ever more promising prodrug systems for DOX delivery and cancer therapy in the near future. © 2017 Informa UK Limited, trading as Taylor & Francis Group.
Abstract:
Recently, an increasing number of bio-safety assessments of cadmium-containing quantum dots (QDs) have suggested that they can lead to detrimental effects on the central nervous system (CNS) of living organisms, but the underlying action mechanisms are still rarely reported. In this study, whole-transcriptome sequencing was performed to analyze the changes in the genome-wide gene expression pattern of rat hippocampus after treatment with cadmium telluride (CdTe) QDs of two sizes, to better understand the mechanisms by which CdTe QDs cause toxic effects in the CNS. We identified 2095 differentially expressed genes (DEGs): 55 DEGs were between the control and 2.2 nm CdTe QDs, 1180 were between the control and 3.5 nm CdTe QDs and 860 were between the two kinds of CdTe QDs. It seemed that 3.5 nm CdTe QD exposure might elicit more severe effects in the rat hippocampus than 2.2 nm CdTe QDs at the transcriptome level. After bioinformatics analysis, we found that most DEG-enriched Gene Ontology subcategories and Kyoto Encyclopedia of Genes and Genomes pathways were related to the immune system process. For example, the Gene Ontology subcategories included immune response, inflammatory response and T-cell proliferation; Kyoto Encyclopedia of Genes and Genomes pathways included the NOD/Toll-like receptor signaling pathway, nuclear factor-κB signaling pathway, tumor necrosis factor signaling pathway, natural killer cell-mediated cytotoxicity and the T/B-cell receptor signaling pathway. Traditional toxicological examinations confirmed the systemic immune response and CNS inflammation in rats exposed to CdTe QDs. This transcriptome analysis not only reveals the probable molecular mechanisms by which CdTe QDs cause neurotoxicity, but also provides a reference for further related studies. Copyright © 2018 John Wiley & Sons, Ltd.