Searched defs:page (Results 1 - 3 of 3) sorted by relevance
/lib/
kfifo.c
    314  struct page *page;
    323  page = virt_to_page(buf);
    328  struct page *npage;
    333  if (page_to_phys(page) != page_to_phys(npage) - l) {
    334          sg_set_page(sgl, page, l - off, off);
    338  page = npage;
    343  sg_set_page(sgl, page, len, off);
swiotlb.c
    134  /* Note that this doesn't work with highmem page */
    455   * For mappings greater than a page, we limit the stride (and
    456   * hence alignment) to a page size.
    730  dma_addr_t swiotlb_map_page(struct device *dev, struct page *page,
    735  phys_addr_t map, phys = page_to_phys(page) + offset;
    792   * phys_to_virt doesn't work with highmem page but we could
    793   * call dma_mark_clean() with highmem page here. However, we
dma-debug.c
     59   * @dev: 'dev' argument to dma_map_{page|single|sg} or dma_alloc_coherent
     60   * @type: single, page, sg, coherent
     61   * @pfn: page frame of the start address
    145  static const char *type2name[4] = { "single", "page",
    428   * dma_alloc_coherent/dma_map_page, initial cacheline in each page of a
    431   * dma_unmap_{single|sg|page} or dma_free_coherent delete the entry. If
    447   * warning if any cachelines in the given page are in the active set.
    553   * debug_dma_assert_idle() - assert that a page is not undergoing dma
    554   * @page: page t
    560  debug_dma_assert_idle(struct page *page)
   1240  debug_dma_map_page(struct device *dev, struct page *page, size_t offset, size_t size, int direction, dma_addr_t dma_addr, bool map_single)
    [all...]
Completed in 420 milliseconds