Heart rate (HR) and respiratory rate (RR) are two crucial health parameters. Recent technological advancements have demonstrated the potential of general-purpose RGB cameras to capture these key health indicators without any physical contact. This dissertation documents the development of effective frameworks for HR and RR assessment under realistic conditions. Specifically, it introduces two frameworks, CPulse and VPulse, for extracting HR information from facial RGB videos, a third framework, OPOIRES, for RR estimation, and a newly created dataset (ASIPL) featuring videos of several day-to-day scenarios. CPulse achieves satisfactory results in terms of accuracy, computational viability, and tolerance to skin-tone variation; however, its performance degrades under extreme head and facial motion. Consequently, VPulse is developed to alleviate multiple accuracy-limiting factors by integrating advanced computer-vision algorithms and employing an effective face-conforming mask to isolate informative pixels in each frame. Comprehensive testing shows that VPulse delivers superior performance, particularly in more intense motion scenarios. For RR estimation, the dissertation introduces OPOIRES, an end-to-end automated algorithm developed after a comprehensive study of respiratory-induced motion (RIM) behavior across the frontal body of numerous individuals. The approach adopts a distinctive methodology based on local points of interest (POIs), mitigating the potentially destructive mixing of RIM from different body locations caused by phase and amplitude discrepancies. Experimental results demonstrate high accuracy in RR estimation and strong tolerance to factors such as illumination variation and video compression.
In conclusion, these findings affirm the applicability of the proposed techniques in multiple realistic everyday scenarios, supporting their potential integration into deployable applications.