Light field photography advances current digital imaging technology by making it possible to adjust focus after a photograph has been captured. This capability is enabled by an array of microlenses mounted above the image sensor, which allows the camera to simultaneously record both the intensity of incoming light and its approximate angle of incidence. The ability to adjust focus after capture makes light field photography well suited to computer vision techniques that estimate the depth of objects in a scene, such as depth from focus/defocus. Another widely used method for extracting depth from images is stereo matching, which seeks to obtain a disparity map linking corresponding objects that appear in both images. This disparity map, along with the known geometry of the cameras, can be used to compute the depth of objects in the scene. Whereas depth from focus/defocus with traditional cameras requires multiple sequential image captures, light field cameras capture these images simultaneously, making them better suited for use in conjunction with stereo matching. A method is presented that combines light field photography and stereo matching to produce an enhanced disparity map. This joint approach exploits the complementary strengths of the two methods to produce a result that is more accurate than either achieves in isolation.
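The conversion from a disparity map and known camera geometry to scene depth mentioned above follows the standard relation for a rectified stereo pair, Z = fB/d, where f is the focal length in pixels, B is the baseline between the two viewpoints, and d is the disparity in pixels. A minimal sketch of this step (the function name and parameter values here are illustrative, not taken from the source):

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to depth (meters) via Z = f * B / d.

    Pixels with near-zero disparity correspond to points at (effectively)
    infinite depth, so they are assigned np.inf rather than dividing by zero.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Illustrative numbers: 700 px focal length, 10 cm baseline, 35 px disparity
d = depth_from_disparity([[35.0, 0.0]], focal_length_px=700.0, baseline_m=0.10)
print(d)  # [[2. inf]] -> 2.0 m for the matched pixel
```

Note the inverse relationship: disparity shrinks as depth grows, which is why stereo matching is most precise for nearby objects and degrades for distant ones.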