There is plenty of sample code online for taking screenshots on Windows, but everything I found handles only a single monitor and cannot capture the full screen when the desktop is extended across two displays. After some research and experimentation, I found that the key is computing the correct width and height of the capture area. The usual way to get the actual screen size is:
HWND hWnd = GetDesktopWindow();
RECT re;
GetWindowRect(hWnd, &re);
int width = re.right - re.left, height = re.bottom - re.top;
The code above returns the screen resolution correctly for a single monitor at 100% display scaling, but what it reports is the logical (virtual) size. If the user sets the display scale above 100%, the computed capture width and height are wrong, and part of the screen is missed. The code needs to change as follows:
void getPhysicalResolution(int& width, int& height)
{
    // Find the monitor the window is on; here we use the desktop window's handle.
    HWND hWnd = GetDesktopWindow();
    HMONITOR hMonitor = MonitorFromWindow(hWnd, MONITOR_DEFAULTTONEAREST);
    // Get the monitor's logical width and height (and its device name).
    MONITORINFOEX miex;
    miex.cbSize = sizeof(miex);
    GetMonitorInfo(hMonitor, &miex);
    // Get the monitor's physical width and height from the current display mode.
    DEVMODE dm;
    dm.dmSize = sizeof(dm);
    dm.dmDriverExtra = 0;
    EnumDisplaySettings(miex.szDevice, ENUM_CURRENT_SETTINGS, &dm);
    width = dm.dmPelsWidth;
    height = dm.dmPelsHeight;
}
This code is unaffected by the Windows display scale and returns the screen's true physical resolution.
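To see why scaling breaks the naive GetWindowRect approach: the logical size Windows reports is the physical size divided by the display scale, so the scale factor can be recovered as physical / logical. A minimal sketch with hypothetical numbers (a 2880x1620 panel at 150% scaling, which reports a 1920x1080 logical resolution):

```cpp
#include <cassert>

// Hypothetical numbers for illustration only: at 150% Windows display
// scaling, a 2880-pixel-wide panel reports a 1920-pixel logical width,
// so the scale factor recovers as physical / logical.
double dpiScale(int physicalWidth, int logicalWidth)
{
    return static_cast<double>(physicalWidth) / logicalWidth;
}

// e.g. dpiScale(2880, 1920) yields 1.5 (i.e. 150% scaling),
//      dpiScale(1920, 1920) yields 1.0 (i.e. 100% scaling).
```

This is why getPhysicalResolution reads the size from DEVMODE (which is always in physical pixels) rather than from the window rectangle.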
However, the code above only reports the primary monitor's resolution; it cannot give the physical size of the full desktop once the desktop is extended onto a second display. The solution is to call the Windows API function EnumDisplayMonitors to enumerate every display attached to the system, then combine the monitors' resolutions to get the total desktop size. The code is as follows:
typedef struct __tagMonitorProperty
{
public:
    long width, height;
    long x, y;
    HDC hdcMonitor;
    HMONITOR hMonitor;
    string monitorName;
    bool primaryScreenFlag;
} MonitorProperty;
BOOL CALLBACK monitorEnumProc(HMONITOR hMonitor, HDC hdcMonitor, LPRECT lprcMonitor, LPARAM dwData)
{
    vector<MonitorProperty> *monitorProperties = (vector<MonitorProperty> *)dwData;
    MonitorProperty monitorProperty;
    monitorProperty.hMonitor = hMonitor;
    monitorProperty.hdcMonitor = hdcMonitor;
    // Query the monitor's device name and whether it is the primary display.
    MONITORINFOEX miex;
    miex.cbSize = sizeof(miex);
    GetMonitorInfo(hMonitor, &miex);
    monitorProperty.monitorName = miex.szDevice;
    monitorProperty.primaryScreenFlag = (miex.dwFlags & MONITORINFOF_PRIMARY) != 0;
    // Query the physical resolution and position from the current display mode.
    DEVMODE dm;
    dm.dmSize = sizeof(dm);
    dm.dmDriverExtra = 0;
    EnumDisplaySettings(miex.szDevice, ENUM_CURRENT_SETTINGS, &dm);
    monitorProperty.width = dm.dmPelsWidth;
    monitorProperty.height = dm.dmPelsHeight;
    monitorProperty.x = dm.dmPosition.x;
    monitorProperty.y = dm.dmPosition.y;
    monitorProperties->push_back(monitorProperty);
    return TRUE; // keep enumerating
}
void getMultiMonitorPhysicalSize(long& width, long& height)
{
    vector<MonitorProperty> monitorProperties;
    EnumDisplayMonitors(NULL, NULL, monitorEnumProc, (LPARAM)&monitorProperties);
    // Track the largest single-monitor width and height seen.
    long maxWidth = 0, maxHeight = 0;
    for (MonitorProperty monitorProperty : monitorProperties)
    {
        maxWidth = (maxWidth < monitorProperty.width) ? monitorProperty.width : maxWidth;
        maxHeight = (maxHeight < monitorProperty.height) ? monitorProperty.height : maxHeight;
    }
    // The last enumerated monitor's origin plus its size gives the far edge
    // of the extended desktop.
    MonitorProperty ms = monitorProperties[monitorProperties.size() - 1];
    width = ms.x + ms.width;
    height = ms.y + ms.height;
    width = (width > maxWidth) ? width : maxWidth;
    height = (height > maxHeight) ? height : maxHeight;
}
The code rests on this observation: on Windows, once the desktop is extended onto a secondary display, the x and y values in the dmPosition member of the secondary display's DEVMODE are offset past the primary display's width or height. For example, if the primary display is 1920*1080 and the desktop extends horizontally, the secondary display's x coordinate is 1920 and its y coordinate is still 0. If the desktop extends vertically (can a desktop even extend vertically? I am guessing on this point), the secondary display's x is 0 and its y is 1080. Given that, taking the {x, y} origin of the last enumerated monitor and adding that monitor's width and height yields the overall width and height of the combined, extended desktop. One caveat: I only have two monitors at home, so I have only verified this algorithm with a dual-monitor setup; I have no way to test configurations with more displays.
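Another caveat worth noting: the last-enumerated-monitor trick assumes the secondary display sits to the right of or below the primary. Windows also allows a monitor to be placed left of or above the primary, in which case its dmPosition coordinates go negative and the sums above undercount. A more general approach is to take the bounding box (union) of all monitor rectangles. Here is a sketch in plain C++ so the geometry is easy to check; the x/y/width/height fields stand in for the DEVMODE values collected during enumeration:

```cpp
#include <algorithm>
#include <vector>

struct MonitorRect { long x, y, width, height; };

// Union of all monitor rectangles: the virtual desktop spans from the
// smallest top-left corner to the largest bottom-right corner, which also
// handles monitors positioned at negative coordinates.
void desktopBounds(const std::vector<MonitorRect>& monitors,
                   long& width, long& height)
{
    long left = 0, top = 0, right = 0, bottom = 0;
    bool first = true;
    for (const MonitorRect& m : monitors)
    {
        if (first)
        {
            left = m.x; top = m.y;
            right = m.x + m.width; bottom = m.y + m.height;
            first = false;
        }
        else
        {
            left   = std::min(left, m.x);
            top    = std::min(top, m.y);
            right  = std::max(right, m.x + m.width);
            bottom = std::max(bottom, m.y + m.height);
        }
    }
    width = right - left;
    height = bottom - top;
}
```

On Windows the same figures are also available directly from GetSystemMetrics(SM_CXVIRTUALSCREEN) and GetSystemMetrics(SM_CYVIRTUALSCREEN), provided the process has been made DPI-aware so the values are in physical pixels.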
With the full desktop's width and height in hand, the capture itself is straightforward. The screenshot code is as follows:
void catchScreen(char *screenshotFilename)
{
    long width, height;
    getMultiMonitorPhysicalSize(width, height);
    ostringstream oss;
    oss << "capturing screen - width: " << width << ", height: " << height;
    runtimeLogger.write(oss.str(), 0, 0, 0);
    // 32 bits (4 bytes) per pixel.
    long imageSize = width * height * 4L;
    char *buf = new char[imageSize];
    // Copy the whole desktop into a compatible in-memory bitmap.
    HWND hDesktopWindow = GetDesktopWindow();
    HDC displayDeviceContext = GetDC(hDesktopWindow);
    HDC memoryDeviceContext = CreateCompatibleDC(0);
    HBITMAP hbm = CreateCompatibleBitmap(displayDeviceContext, width, height);
    HGDIOBJ oldBitmap = SelectObject(memoryDeviceContext, hbm);
    StretchBlt(memoryDeviceContext, 0, 0, width, height, displayDeviceContext, 0, 0, width, height, SRCCOPY);
    // Deselect the bitmap first: GetDIBits requires that the bitmap not be
    // selected into a device context when it is called.
    SelectObject(memoryDeviceContext, oldBitmap);
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize = sizeof(bi.bmiHeader);
    bi.bmiHeader.biWidth = width;
    bi.bmiHeader.biHeight = height; // positive height: bottom-up DIB
    bi.bmiHeader.biPlanes = 1;
    bi.bmiHeader.biBitCount = 32;
    bi.bmiHeader.biCompression = BI_RGB;
    bi.bmiHeader.biSizeImage = 0;
    GetDIBits(memoryDeviceContext, hbm, 0, height, buf, &bi, DIB_RGB_COLORS);
    // Fill in the BMP file headers and write the file.
    BITMAPFILEHEADER bif;
    bif.bfType = MAKEWORD('B', 'M'); // 'BM' = 0x4D42
    bif.bfSize = imageSize + 54;
    bif.bfReserved1 = 0;
    bif.bfReserved2 = 0;
    bif.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER); // = 54
    BITMAPINFOHEADER bii = {};
    bii.biSize = sizeof(BITMAPINFOHEADER); // = 40
    bii.biWidth = width;
    bii.biHeight = height;
    bii.biPlanes = 1;
    bii.biBitCount = 32;
    bii.biCompression = BI_RGB;
    bii.biSizeImage = imageSize;
    ofstream ofs(screenshotFilename, ofstream::binary | ofstream::out);
    ofs.write((const char *)&bif, sizeof bif);
    ofs.write((const char *)&bii, sizeof bii);
    ofs.write(buf, imageSize);
    // Release resources.
    delete[] buf;
    DeleteObject(hbm);
    DeleteDC(memoryDeviceContext);
    ReleaseDC(hDesktopWindow, displayDeviceContext);
}
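The constants 54 and 40 in the header setup come from the fixed on-disk sizes of the two BMP headers: 14 bytes for the file header plus 40 for the info header. A portable sketch that mirrors the on-disk layout with packed structs (these are stand-ins for the Win32 types, which declare BITMAPFILEHEADER with 2-byte packing so that it also comes out at 14 bytes):

```cpp
#include <cstdint>

// Packed mirrors of the on-disk BMP headers.
#pragma pack(push, 1)
struct BmpFileHeader {
    uint16_t bfType;                    // 'BM' = 0x4D42 little-endian
    uint32_t bfSize;                    // total file size in bytes
    uint16_t bfReserved1, bfReserved2;  // must be zero
    uint32_t bfOffBits;                 // offset of pixel data = 14 + 40 = 54
};
struct BmpInfoHeader {
    uint32_t biSize;                    // size of this header = 40
    int32_t  biWidth, biHeight;         // positive biHeight: bottom-up rows
    uint16_t biPlanes;                  // always 1
    uint16_t biBitCount;                // 32 for BGRA pixels
    uint32_t biCompression, biSizeImage;
    int32_t  biXPelsPerMeter, biYPelsPerMeter;
    uint32_t biClrUsed, biClrImportant;
};
#pragma pack(pop)

static_assert(sizeof(BmpFileHeader) == 14, "file header is 14 bytes on disk");
static_assert(sizeof(BmpInfoHeader) == 40, "info header is 40 bytes on disk");
```

Keeping both structs fully zero-initialized before filling in the fields (as catchScreen does with BITMAPINFOHEADER bii = {}) matters because every byte of them is written verbatim into the file.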
Copyright notice: this article, "Windows下双显示器截屏方法" (Capturing the screen under dual monitors on Windows), was contributed by its author; source: https://m.elefans.com/dongtai/1727403178a1113084.html