This paper presents a consistency-aware deep framework for resolving inconsistencies in human interaction understanding (HIU) grouping and labeling. The framework comprises three components: a backbone CNN that extracts image features, a factor graph network that implicitly learns higher-order consistencies among labeling and grouping variables, and a consistency-aware reasoning module that explicitly enforces these consistencies. Motivated by our key observation that the consistency-aware reasoning bias can be embedded into an energy function (or, equivalently, into a particular loss function), the last module is designed to minimize this function and thereby deliver consistent predictions. We introduce an efficient mean-field inference method that allows all modules of the network to be trained end to end. Experimentally, the two proposed consistency-learning modules complement each other and together yield significant performance gains, achieving leading results on three HIU benchmarks. Further experiments confirm the approach's effectiveness in detecting human-object interactions.
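As a rough illustration of the inference idea, the sketch below shows unrolled mean-field updates over unary and pairwise potentials; the tensor shapes, the energy terms, and the specific consistency constraints are illustrative assumptions and not the paper's exact formulation.

```python
# Minimal sketch of unrolled mean-field inference over a pairwise energy.
# unary: (N, K) per-variable logits; pairwise: (N, N, K, K) compatibility terms.
import torch
import torch.nn.functional as F

def mean_field(unary, pairwise, num_iters=5):
    q = F.softmax(unary, dim=-1)                    # initialize approximate marginals
    for _ in range(num_iters):
        # message passing: expected pairwise energy under the current marginals
        msg = torch.einsum('jl,ijkl->ik', q, pairwise)
        q = F.softmax(unary - msg, dim=-1)          # coordinate-ascent update
    return q                                        # approximate marginals per variable
```

Because every update is differentiable, such an unrolled loop can be trained end to end together with the backbone that produces the unary terms.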
The tactile sensations rendered by mid-air haptic technology include, but are not limited to, points, lines, shapes, and textures; rendering them requires increasingly complex haptic displays. Meanwhile, tactile illusions have proven highly effective in advancing the development of contact and wearable haptic displays. This article exploits the apparent tactile motion illusion to render directional haptic lines in mid-air, a prerequisite for displaying shapes and icons. A psychophysical study, together with two pilot studies, assesses how well participants recognize the direction of a dynamic tactile pointer (DTP) versus an apparent tactile pointer (ATP). To this end, we identify the optimal duration and direction parameters for DTP and ATP mid-air haptic lines and discuss the implications of our findings for haptic feedback design and device complexity.
Artificial neural networks (ANNs) have recently shown effectiveness and promise for identifying steady-state visual evoked potential (SSVEP) targets. However, they typically contain many trainable parameters and therefore require a substantial amount of calibration data, a significant impediment given the high cost of EEG collection. This paper aims to design a compact network that avoids overfitting for individual SSVEP recognition.
This study incorporates prior knowledge of SSVEP recognition tasks into the design of an attention-based neural network. Exploiting the high interpretability of the attention mechanism, the attention layer transfers conventional spatial filtering algorithms into the ANN architecture, reducing the number of connections between layers. Design constraints derived from SSVEP signal models and from weights shared across stimuli further reduce the number of trainable parameters.
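As a hedged illustration (not the authors' exact architecture), the sketch below shows an attention layer that learns a spatial filter over EEG channels shared by all stimuli, with the filtered signal scored against per-stimulus sinusoidal reference templates; the shapes, template construction, and read-out are assumptions.

```python
# Illustrative attention-as-spatial-filter layer for SSVEP recognition.
import torch
import torch.nn as nn

class SpatialAttentionSSVEP(nn.Module):
    def __init__(self, n_channels: int, references: torch.Tensor):
        super().__init__()
        # one attention vector over channels, shared across every stimulus class
        self.channel_attn = nn.Parameter(torch.zeros(n_channels))
        # references: (n_stimuli, n_samples), e.g. sinusoids at stimulus frequencies
        self.register_buffer('references', references)

    def forward(self, x):                             # x: (batch, n_channels, n_samples)
        w = torch.softmax(self.channel_attn, dim=0)   # interpretable spatial weights
        s = torch.einsum('bct,c->bt', x, w)           # spatially filtered signal
        s = (s - s.mean(-1, keepdim=True)) / (s.std(-1, keepdim=True) + 1e-6)
        r = self.references
        r = (r - r.mean(-1, keepdim=True)) / (r.std(-1, keepdim=True) + 1e-6)
        return s @ r.t() / s.shape[-1]                # correlation-like score per stimulus
```

Sharing a single attention vector across all stimuli is what keeps the parameter count small in this sketch.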
A simulation study on two widely used datasets confirmed that the proposed compact ANN structure, with the suggested constraints, eliminates redundant parameters. Compared with established deep neural network (DNN) and correlation analysis (CA) recognition methods, the proposed approach reduces trainable parameters by more than 90% and 80%, respectively, while improving individual recognition accuracy by at least 57% and 7%, respectively.
Incorporating prior task knowledge into the ANN makes it both more effective and more efficient. With fewer trainable parameters, the compact structure of the proposed ANN requires less calibration while delivering superior performance in individual SSVEP recognition.
Positron emission tomography (PET) with either fluorodeoxyglucose (FDG) or florbetapir (AV45) has consistently demonstrated its effectiveness in diagnosing Alzheimer's disease. However, the high cost and radioactivity of PET limit its practical use. We present a deep learning model, a 3-dimensional multi-task multi-layer perceptron mixer, that simultaneously predicts FDG-PET and AV45-PET standardized uptake value ratios (SUVRs) from widely available structural magnetic resonance imaging data; the model also enables Alzheimer's disease diagnosis using embedding features derived from the SUVR predictions. Experiments show that the proposed method estimates FDG/AV45-PET SUVRs accurately, with Pearson's correlation coefficients of 0.66 and 0.61 between estimated and actual SUVRs, respectively. The estimated SUVRs also exhibit high sensitivity and distinct longitudinal patterns across disease states. Leveraging the PET embedding features, the proposed method outperforms competing methods in diagnosing Alzheimer's disease and in distinguishing stable from progressive mild cognitive impairment across five independent datasets, achieving AUCs of 0.968 and 0.776 on the ADNI dataset, respectively, and generalizing better to external datasets. Moreover, the dominant patches identified by the trained model involve brain regions known to be important in Alzheimer's disease, suggesting strong biological interpretability of the proposed method.
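The sketch below outlines a simplified multi-task MLP-mixer mapping MRI patch embeddings to two regression targets (FDG and AV45 SUVRs); the layer sizes, patch count, and 3-D patchification are illustrative assumptions rather than the paper's exact design.

```python
# Simplified multi-task MLP-mixer over 3-D MRI patch embeddings.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, n_patches: int, dim: int):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mix = nn.Sequential(nn.Linear(n_patches, n_patches), nn.GELU(),
                                       nn.Linear(n_patches, n_patches))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mix = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                         nn.Linear(dim, dim))

    def forward(self, x):                              # x: (batch, n_patches, dim)
        y = self.norm1(x).transpose(1, 2)              # mix information across patches
        x = x + self.token_mix(y).transpose(1, 2)
        x = x + self.channel_mix(self.norm2(x))        # mix across feature channels
        return x

class MultiTaskMixer(nn.Module):
    def __init__(self, n_patches: int = 216, dim: int = 128, depth: int = 4):
        super().__init__()
        self.blocks = nn.Sequential(*[MixerBlock(n_patches, dim) for _ in range(depth)])
        self.head_fdg = nn.Linear(dim, 1)              # FDG-SUVR regression head
        self.head_av45 = nn.Linear(dim, 1)             # AV45-SUVR regression head

    def forward(self, patch_emb):                      # patch_emb: (batch, n_patches, dim)
        feat = self.blocks(patch_emb).mean(dim=1)      # pooled embedding feature
        return self.head_fdg(feat), self.head_av45(feat)
```

In such a design, the pooled embedding (`feat`) is the kind of feature that could then be reused by a downstream diagnosis classifier.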
Because fine-grained labels are unavailable, current research evaluates signal quality only at a coarse level. This article addresses fine-grained quality assessment of electrocardiogram (ECG) signals with a weakly supervised approach in which continuous segment-level quality scores are derived from coarse labels.
Specifically, FGSQA-Net, a novel network for signal quality assessment, comprises a feature-contracting module and a feature-aggregation module. A stack of feature-contracting blocks, each combining a residual convolutional neural network (CNN) block with a max pooling layer, produces a feature map whose spatial dimension corresponds to continuous segments. Aggregating features along the channel dimension then yields segment-level quality scores.
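As a rough, assumption-laden sketch of this idea, the code below stacks residual convolutional blocks with max pooling to contract the time axis toward one value per segment, then aggregates channels into per-segment quality scores; block counts, kernel sizes, and channel widths are illustrative only.

```python
# Illustrative feature-contracting network producing segment-level quality scores.
import torch
import torch.nn as nn

class ContractBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(ch, ch, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, kernel_size=3, padding=1))
        self.pool = nn.MaxPool1d(2)                    # halves the temporal resolution

    def forward(self, x):                              # x: (batch, ch, time)
        return self.pool(torch.relu(x + self.conv(x)))  # residual block + contraction

class FGSQAHead(nn.Module):
    def __init__(self, ch: int = 16, n_blocks: int = 4):
        super().__init__()
        self.stem = nn.Conv1d(1, ch, kernel_size=7, padding=3)
        self.blocks = nn.Sequential(*[ContractBlock(ch) for _ in range(n_blocks)])
        self.agg = nn.Conv1d(ch, 1, kernel_size=1)     # aggregate channels per segment

    def forward(self, ecg):                            # ecg: (batch, 1, time)
        feat = self.blocks(self.stem(ecg))             # each time step now spans one segment
        return torch.sigmoid(self.agg(feat)).squeeze(1)  # segment-level scores in [0, 1]
```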
The proposed approach was evaluated on two real-world ECG databases and one synthetic dataset. Compared with the state-of-the-art beat-by-beat quality assessment method, our approach achieved a notable average AUC of 0.975. Visualizations of 12-lead and single-lead signals over windows of 0.64 to 17 s show that high-quality and low-quality segments can be distinguished effectively at a fine granularity.
FGSQA-Net is flexible and effective for fine-grained quality assessment of various ECG recordings, making it well suited to wearable ECG monitoring applications.
This is the first study to examine fine-grained ECG quality assessment using weak labels, and its methodology can be readily adapted to the evaluation of other physiological signals.
Deep neural networks have been successfully applied to nuclei detection in histopathological images, but they require training and testing data to follow similar probability distributions. Domain shift between histopathology images is common in real-world settings and frequently degrades detection performance. Although existing domain adaptation methods have shown encouraging results, cross-domain nuclei detection remains challenging. First, because nuclei are small, it is difficult to extract sufficient nuclear features, which hampers feature alignment. Second, owing to the absence of annotations in the target domain, some extracted features contain background pixels, making them indistinct and significantly impairing the alignment procedure. To improve cross-domain nuclei detection, we propose an end-to-end graph-based nuclei feature alignment (GNFA) method. Nuclei graph convolutional networks (NGCNs) aggregate information from neighboring nuclei in a constructed nuclei graph, producing feature-rich representations for alignment. In addition, an importance learning module (ILM) is designed to emphasize salient nuclear features and reduce the adverse effect of background pixels in the target domain during alignment. Using the discriminative node features generated by the GNFA, our method performs effective feature alignment and mitigates domain shift in nuclei detection. Evaluated across multiple adaptation scenarios, our method achieves state-of-the-art cross-domain nuclei detection, outperforming existing domain adaptation methods.
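The schematic sketch below conveys the two ideas at a high level: a simple graph convolution aggregates each nucleus's features from its neighbors, and a learned importance gate down-weights background-contaminated features before any alignment loss is applied. It is an assumption-based illustration, not the paper's exact GNFA/ILM design.

```python
# Schematic graph aggregation and importance gating over per-nucleus features.
import torch
import torch.nn as nn

class NucleiGraphConv(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, feats, adj):            # feats: (N, dim), adj: (N, N) normalized
        return torch.relu(self.lin(adj @ feats))   # aggregate neighboring nuclei

class ImportanceGate(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):                 # feats: (N, dim)
        w = torch.sigmoid(self.score(feats))  # per-nucleus importance in (0, 1)
        return feats * w                      # suppress background-dominated nodes
```

The gated node features would then feed whatever domain-alignment objective (e.g., an adversarial or discrepancy loss) the detector is trained with.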
Breast cancer-related lymphedema (BCRL), a frequent and debilitating consequence of breast cancer, can affect up to one in five breast cancer survivors. BCRL substantially reduces patients' quality of life (QOL) and poses a major challenge for healthcare systems. Early surveillance and ongoing monitoring of lymphedema are essential for developing personalized treatment strategies for cancer surgery survivors. This scoping review was therefore designed to examine current methods for remote BCRL monitoring and their potential to support telehealth interventions for lymphedema.