Recent advances in, and the ubiquity of, Unmanned Aerial System (UAS) capabilities have heightened the need for counter-UAS capabilities. While visible imaging and acoustic sensors have been widely used to detect and track UAS signatures, several challenges remain: (1) images acquired with electro-optical (EO) sensors sensitive to visible light (0.4-0.7μm) contain significant clutter, leading to false detections, and have limited feasibility under low or variable lighting conditions (e.g., nighttime); (2) visible imagery is more susceptible to nonlinear, time-varying atmospheric distortions; and (3) the discriminative power spectrum of UAS acoustic signatures is sensitive to amplifier gains, microphone array directional response characteristics, and obfuscation by non-white acoustic clutter and noise. Instead, we consider using thermal infrared cameras, specifically longwave infrared (7-14μm), to detect UAS platforms. Additionally, we consider detecting UAS platforms from a UAS-based counter-surveillance platform equipped with the thermal infrared imaging payload, as opposed to conventional ground-based counter-UAS imaging systems. Here, challenges arise from the limited discriminative detail and quality of thermal imagery (compared with visible imagery) due to variations in sensors, standoff distance, and motion. Therefore, we present a new dataset containing both thermal and visible imagery designed to evaluate UAS detection performance using existing object detectors, such as YOLOv5, RetinaNet, and SSD. More importantly, we present a novel deep domain-adaptive framework that optimizes these object detectors for more discriminative detection performance and enhances both daytime and nighttime operation. Lastly, we provide extensive analysis comparing our framework to state-of-the-art domain adaptation methods on our dataset.
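The abstract mentions evaluating UAS detection performance with detectors such as YOLOv5, RetinaNet, and SSD. As background, a minimal sketch of the IoU-based matching commonly used to score such detectors against ground-truth boxes (this is generic illustration, not the paper's evaluation code; all function names are hypothetical):

```python
# Hypothetical sketch of IoU-based detection scoring, not the authors' code.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_detections(preds, gts, thresh=0.5):
    """Greedily match predictions (box, score) to ground-truth boxes,
    highest confidence first; return true/false positives and misses."""
    matched, tp = set(), 0
    for box, _score in sorted(preds, key=lambda p: -p[1]):
        best, best_iou = None, thresh
        for i, g in enumerate(gts):
            if i not in matched and iou(box, g) >= best_iou:
                best, best_iou = i, iou(box, g)
        if best is not None:
            matched.add(best)
            tp += 1
    return tp, len(preds) - tp, len(gts) - tp
```

Counts of true positives, false positives, and false negatives at a fixed IoU threshold are the building blocks for the precision/recall and AP figures typically reported when comparing detectors across thermal and visible imagery.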