b4bbb107d73bbc0d92c9ae7fd8e69580aa9381e7 |
|
27-Jun-2014 |
Thierry Reding <treding@nvidia.com> |
dma-mapping: Provide write-combine allocations Provide an implementation for dma_{alloc,free,mmap}_writecombine() when the architecture supports DMA attributes. Signed-off-by: Thierry Reding <treding@nvidia.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
812b99e4b024d6f83e9281ec1a0bd4bf63dad90f |
|
24-Apr-2014 |
Santosh Shilimkar <santosh.shilimkar@ti.com> |
ARM: dma: implement set_arch_dma_coherent_ops() Implement the set_arch_dma_coherent_ops() for ARM architecture. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Olof Johansson <olof@lixom.net> Cc: Grant Likely <grant.likely@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
6ce0d20016925d031f1e24d64302e4c976d7cec6 |
|
24-Apr-2014 |
Grygorii Strashko <grygorii.strashko@ti.com> |
ARM: dma: Use dma_pfn_offset for dma address translation In most cases, DMA address translation can be performed using the offset of the bus address space relative to the physical address space, as follows: PFN->DMA: __pfn_to_phys(pfn + [-]dma_pfn_offset) DMA->PFN: __phys_to_pfn(dma_addr) + [-]dma_pfn_offset Thanks to Russell King for suggesting the optimised macros for the conversion. Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Russell King <linux@arm.linux.org.uk> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Olof Johansson <olof@lixom.net> Cc: Grant Likely <grant.likely@linaro.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Rob Herring <robh@kernel.org> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
fbd989b1d73e3b3565dad5227a581e6f456c895f |
|
30-Oct-2013 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
arm: make SWIOTLB available IOMMU_HELPER is needed because SWIOTLB calls iommu_is_span_boundary, provided by lib/iommu_helper.c. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> CC: will.deacon@arm.com Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Changes in v9: - remove unneeded include asm/cacheflush.h; - just return 0 if !dev->dma_mask in dma_capable. Changes in v8: - use __phys_to_pfn and __pfn_to_phys. Changes in v7: - dma_mark_clean: empty implementation; - in dma_capable use coherent_dma_mask if dma_mask hasn't been allocated. Changes in v6: - check for dev->dma_mask being NULL in dma_capable. Changes in v5: - implement dma_mark_clean using dmac_flush_range. Changes in v3: - dma_capable: do not treat dma_mask as a limit; - remove SWIOTLB dependency on NEED_SG_DMA_LENGTH.
|
26ba47b18318abe7dadbe9294a611c0e932651d8 |
|
01-Aug-2013 |
Santosh Shilimkar <santosh.shilimkar@ti.com> |
ARM: 7805/1: mm: change max*pfn to include the physical offset of memory Most kernel code assumes that max*pfn holds the maximum PFN, because the physical start of memory is expected to be PFN 0. Since this assumption does not hold on ARM, max*pfn has instead meant the number of memory pages. This was done to keep drivers happy which use these variables to calculate the DMA bounce limit from dma_mask. Now that there is an architecture override for the maximum DMAable PFN, let's make max*pfn mean the maximum PFN on ARM as well. Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Will Deacon <will.deacon@arm.com> Cc: Nicolas Pitre <nicolas.pitre@linaro.org> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
c7e9bc548325f19635e7ac7cd5f3ec587228952e |
|
18-Oct-2013 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
arm/xen: get_dma_ops: return xen_dma_ops if we are running as xen_initial_domain We can't simply override arm_dma_ops with xen_dma_ops because devices are allowed to have their own dma_ops and they take precedence over arm_dma_ops. When running on Xen as initial domain, we always want xen_dma_ops to be the one in use. We introduce __generic_dma_ops to allow xen_dma_ops functions to call back to the native implementation. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Suggested-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> CC: will.deacon@arm.com CC: linux@arm.linux.org.uk Changes in v7: - return xen_dma_ops only if we are the initial domain; - rename __get_dma_ops to __generic_dma_ops.
|
06e6295bcecefea9dc29fc84b5fd6848061365a0 |
|
15-Oct-2013 |
Stefano Stabellini <stefano.stabellini@eu.citrix.com> |
arm: make SWIOTLB available IOMMU_HELPER is needed because SWIOTLB calls iommu_is_span_boundary, provided by lib/iommu_helper.c. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> CC: will.deacon@arm.com CC: linux@arm.linux.org.uk Changes in v8: - use __phys_to_pfn and __pfn_to_phys. Changes in v7: - dma_mark_clean: empty implementation; - in dma_capable use coherent_dma_mask if dma_mask hasn't been allocated. Changes in v6: - check for dev->dma_mask being NULL in dma_capable. Changes in v5: - implement dma_mark_clean using dmac_flush_range. Changes in v3: - dma_capable: do not treat dma_mask as a limit; - remove SWIOTLB dependency on NEED_SG_DMA_LENGTH.
|
a0157573041403e7507a6f3f32279fc14ff5c02e |
|
22-Oct-2012 |
Ming Lei <ming.lei@canonical.com> |
ARM: dma-mapping: support debug_dma_mapping_error Without the patch, a warning like the one below is dumped if DMA-API debug is enabled: [ 11.069763] ------------[ cut here ]------------ [ 11.074645] WARNING: at lib/dma-debug.c:948 check_unmap+0x770/0x860() [ 11.081420] ehci-omap ehci-omap.0: DMA-API: device driver failed to check map error[device address=0x0000000 0adb78e80] [size=8 bytes] [mapped as single] [ 11.095611] Modules linked in: Cc: Russell King <linux@arm.linux.org.uk> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Joerg Roedel <joro@8bytes.org>
|
87b54e786afda828984645a8364a228ae8ac71f4 |
|
21-Nov-2012 |
Gregory CLEMENT <gregory.clement@free-electrons.com> |
arm: dma mapping: Export a dma ops function arm_dma_set_mask Expose another DMA operations function: arm_dma_set_mask. This function will be added to custom DMA ops for Armada 370/XP. Depending on its configuration, Armada 370/XP can be set up as a "nearly" coherent architecture. In this case the DMA ops are made up of: - specific functions for this architecture - already exposed ARM DMA-related functions - arm_dma_set_mask, which was not yet exposed. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
16cf8a80a8f0f4757427b17cdfb6c4897674db68 |
|
08-Nov-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: remove init_consistent_dma_size() stub Since commit e9da6e9905e639b0 ("ARM: dma-mapping: remove custom consistent dma region"), setting the consistent dma memory size is no longer required. All calls to this function have already been removed, so the init_consistent_dma_size() stub can also go. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
697575896670ba9e76760ce8bbc1f5a3001967d6 |
|
26-Oct-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
Revert "ARM: dma-mapping: support debug_dma_mapping_error" This reverts commit 871ae57adc5ed092c1341f411514d0e8482e2611, which is scheduled for v3.8 and accidentally got into the v3.7-rc series. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
871ae57adc5ed092c1341f411514d0e8482e2611 |
|
22-Oct-2012 |
Ming Lei <ming.lei@canonical.com> |
ARM: dma-mapping: support debug_dma_mapping_error Without the patch, a warning like the one below is dumped if DMA-API debug is enabled: [ 11.069763] ------------[ cut here ]------------ [ 11.074645] WARNING: at lib/dma-debug.c:948 check_unmap+0x770/0x860() [ 11.081420] ehci-omap ehci-omap.0: DMA-API: device driver failed to check map error[device address=0x0000000 0adb78e80] [size=8 bytes] [mapped as single] [ 11.095611] Modules linked in: Cc: Russell King <linux@arm.linux.org.uk> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
dd37e9405a8e85be49a60b2530efeb5f06bcb753 |
|
21-Aug-2012 |
Rob Herring <rob.herring@calxeda.com> |
ARM: add coherent dma ops arch_is_coherent is problematic as it is a global symbol. This doesn't work for multi-platform kernels or platforms which can support per-device coherent DMA. This adds arm_coherent_dma_ops to be used for devices which are connected coherently (i.e. to the ACP port on Cortex-A9 or A15). The arm_dma_ops are modified at boot when arch_is_coherent is true. Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
6e5267aa543817015edb4a65c66e15f9809f92bd |
|
20-Aug-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: DMA-Mapping: add function for setting coherent pool size from platform code Some platforms might need to increase the atomic coherent pool to make sure that their devices will be able to allocate all their buffers from atomic context. This function can also be used to decrease the atomic coherent pool size if coherent allocations are not used on the given sub-platform. Suggested-by: Josh Coombs <josh.coombs@gmail.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
|
dc2832e1e7db3f9ad465d2fe894bd69ef05d1e4b |
|
13-Jun-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: add support for dma_get_sgtable() This patch adds support for the dma_get_sgtable() function, which is required to let drivers share buffers allocated by the DMA-mapping subsystem. The generic implementation based on virt_to_page() is not suitable for the ARM dma-mapping subsystem. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
|
64ccc9c033c6089b2d426dad3c56477ab066c999 |
|
14-Jun-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
common: dma-mapping: add support for generic dma_mmap_* calls Commit 9adc5374 ('common: dma-mapping: introduce mmap method') added a generic method for implementing mmap user call to dma_map_ops structure. This patch converts ARM and PowerPC architectures (the only providers of dma_mmap_coherent/dma_mmap_writecombine calls) to use this generic dma_map_ops based call and adds a generic cross architecture definition for dma_mmap_attrs, dma_mmap_coherent, dma_mmap_writecombine functions. The generic mmap virt_to_page-based fallback implementation is provided for architectures which don't provide their own implementation for mmap method. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com>
|
e9da6e9905e639b0f842a244bc770b48ad0523e9 |
|
30-Jul-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: remove custom consistent dma region This patch changes the dma-mapping subsystem to use generic vmalloc areas for all consistent dma allocations. This increases the total size limit of the consistent allocations and removes platform hacks and a lot of duplicated code. Atomic allocations are served from a special pool preallocated at boot, because vmalloc areas cannot be reliably created in atomic context. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Reviewed-by: Kyungmin Park <kyungmin.park@samsung.com> Reviewed-by: Minchan Kim <minchan@kernel.org>
|
f99d60341238fe73fc514129cd9ae4e44e1b2c47 |
|
16-May-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: use alloc, mmap, free from dma_ops This patch converts the dma_alloc/free/mmap_{coherent,writecombine} functions to use the generic alloc/free/mmap methods from the dma_map_ops structure. A new DMA_ATTR_WRITE_COMBINE DMA attribute has been introduced to implement the writecombine methods. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
|
15237e1f505b3e5c2276f240b01cd2133e110cbc |
|
10-Feb-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: move all dma bounce code to separate dma ops structure This patch removes dma bounce hooks from the common dma mapping implementation on ARM architecture and creates a separate set of dma_map_ops for dma bounce devices. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
|
2dc6a016bbedf18f18ad73997e5338307d6dbde9 |
|
10-Feb-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: use asm-generic/dma-mapping-common.h This patch modifies dma-mapping implementation on ARM architecture to use common dma_map_ops structure and asm-generic/dma-mapping-common.h helpers. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
|
a227fb92a0f5f0dd8282719386e9b3a29f0d16b2 |
|
10-Feb-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: remove offset parameter to prepare for generic dma_ops This patch removes the need for the offset parameter in the dma bounce functions. This is required to let the dma-mapping framework on ARM use the common, generic dma_map_ops based dma-mapping helpers. Background and more detailed explanation: The dma_*_range_* functions have been available since the early days of the dma mapping api. They are the correct way of doing partial syncs on a buffer (usually used by network device drivers). This patch changes only the internal implementation of the dma bounce functions to let them tunnel through the dma_map_ops structure. The driver api stays unchanged, so drivers are still obliged to call the dma_*_range_* functions to keep the code clean and easy to understand. The only drawback of this patch is reduced detection of dma api abuse. Consider the following code: dma_addr = dma_map_single(dev, ptr, 64, DMA_TO_DEVICE); dma_sync_single_range_for_cpu(dev, dma_addr+16, 0, 32, DMA_TO_DEVICE); Without the patch such code fails, because the dma bounce code is unable to find the bounce buffer for the given dma_address. After the patch, the above sync call will be equivalent to: dma_sync_single_range_for_cpu(dev, dma_addr, 16, 32, DMA_TO_DEVICE); which succeeds. I don't consider this a real problem, because DMA API abuse should be caught by the debug_dma_* function family. This patch lets us simplify the internal low-level implementation without changing the driver-visible API. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
|
553ac78877242b6d8b591323731df304140d0f99 |
|
29-Feb-2012 |
Marek Szyprowski <m.szyprowski@samsung.com> |
ARM: dma-mapping: introduce DMA_ERROR_CODE constant Replace all uses of ~0 with DMA_ERROR_CODE, which should make the code easier to read. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Acked-by: Kyungmin Park <kyungmin.park@samsung.com> Tested-By: Subash Patel <subash.ramaswamy@linaro.org>
|
01f461a3a4321a9f98b6b508f32d2396c5704b7c |
|
23-Aug-2011 |
Catalin Marinas <catalin.marinas@arm.com> |
ARM: 7058/1: LPAE: Cast the dma_addr_t argument to unsigned long in dma_to_virt This is to avoid a compiler warning when invoking the __bus_to_virt() macro. The dma_to_virt() function gets addresses within the 32-bit range. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
99d1717dd7fecf2b10195b0d864323b952b4eba0 |
|
02-Aug-2011 |
Jon Medhurst <tixy@yxit.co.uk> |
ARM: Add init_consistent_dma_size() This function can be called during boot to increase the size of the consistent DMA region above its default value of 2MB. It must be called before the memory allocator is initialised, i.e. before any core_initcall. Signed-off-by: Jon Medhurst <tixy@yxit.co.uk> Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
|
022ae537b23cb14a391565e9ad9e9945f4b17138 |
|
08-Jul-2011 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dma: replace ISA_DMA_THRESHOLD with a variable ISA_DMA_THRESHOLD has been unused by non-arch code, so let's now get rid of it on ARM by replacing it with arm_dma_zone_mask. Move dma_supported() and dma_set_mask() out of line, and have dma_supported() check this new variable instead. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
3973c337759cd201773a0ecc7b6f39f1ea2a6287 |
|
08-Jul-2011 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dmabounce: simplify dma_set_mask() Simplify the dmabounce specific code in dma_set_mask(). We can just omit setting the dma mask if dmabounce is enabled (we will have already set dma mask via callbacks when the device is created in that case.) Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
0703ed2a6b260cd743adf49a8281eb064d728832 |
|
04-Jul-2011 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dmabounce: get rid of dma_needs_bounce global function Pass the device type specific needs_bounce function in at dmabounce register time, avoiding the need for a platform specific global function to do this. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
8021a4a048a85906302bd0236f3d125473be65b1 |
|
03-Jul-2011 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dma-mapping: define dma_(un)?map_single in terms of dma_(un)?map_page Use dma_map_page()/dma_unmap_page() internals to handle dma_map_single() and dma_unmap_single(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
24056f525051a9e186af28904b396320e18bf9a0 |
|
03-Jan-2011 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: DMA: add support for DMA debugging Add ARM support for the DMA debug infrastructure, which allows the DMA API usage to be debugged. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
9eedd96301cad8ab58ee8c1e579677d0a75c2ba1 |
|
03-Jan-2011 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: DMA: Replace page_to_dma()/dma_to_page() with pfn_to_dma()/dma_to_pfn() Replace the page_to_dma() and dma_to_page() macros with their PFN equivalents. This allows us to map parts of memory which do not have a struct page allocated to them to bus addresses. This will be used internally by dma_alloc_coherent()/dma_alloc_writecombine(). Build tested on Versatile, OMAP1, IOP13xx and KS8695. Tested-by: Janusz Krzysztofik <jkrzyszt@tis.icnet.pl> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
0485e18bc4112a3b548baa314c24bfbece4d156b |
|
05-Sep-2010 |
Russell King <rmk+kernel@arm.linux.org.uk> |
Revert "[ARM] pxa: remove now unnecessary dma_needs_bounce()" This reverts commit 4fa5518, which causes a compilation regression for IXP4xx platforms. Reported-by: Richard Cochran <richardcochran@gmail.com> Acked-by: Eric Miao <eric.y.miao@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
3b9c6c11f519718d618f5d7c9508daf78b207f6f |
|
11-Aug-2010 |
FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> |
dma-mapping: remove dma_is_consistent API Architectures implement dma_is_consistent() in different ways (some misinterpret the definition of the API in DMA-API.txt), so it hasn't been very useful for drivers. We have only one user of the API in the tree, and out-of-tree drivers are unlikely to use it. Even if we fix dma_is_consistent() in some architectures, it doesn't look useful at all. It was invented long ago for some old systems that couldn't allocate coherent memory at all. It's better to export only APIs that are definitely necessary for drivers, so let's remove this API. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: James Bottomley <James.Bottomley@HansenPartnership.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
4565f0170dfc849b3629c27d769db800467baa62 |
|
11-Aug-2010 |
FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> |
dma-mapping: unify dma_get_cache_alignment implementations dma_get_cache_alignment returns the minimum DMA alignment. Architectures define it as ARCH_DMA_MINALIGN (formerly ARCH_KMALLOC_MINALIGN), so we can unify the dma_get_cache_alignment implementations. Note that some architectures implement dma_get_cache_alignment wrongly: dma_get_cache_alignment() should return the minimum DMA alignment, so fully-coherent architectures should return 1. This patch also fixes that issue. Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: <linux-arch@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
4fa5518c65df7a2c4b6c58061ac53d0b01228042 |
|
05-Jun-2010 |
Eric Miao <eric.y.miao@gmail.com> |
[ARM] pxa: remove now unnecessary dma_needs_bounce() With a correct dev->dma_mask set before calling dmabounce_register_dev(), dma_needs_bounce() is not necessary. The sa1111, though, is a bit complicated. Until it is fully understood and fixed, dma_needs_bounce() for sa1111 is kept when CONFIG_SA1111 is enabled, with no side effects (guarded by the machine_is_* condition). Thanks to Mike Rapoport for fixing one error in the original version of the patch and getting it tested. Acked-by: Mike Rapoport <mike@compulab.co.il> Signed-off-by: Eric Miao <eric.y.miao@gmail.com>
|
6fee48cd330c68332f9712bc968d934a1a84a32a |
|
11-Mar-2010 |
FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> |
dma-mapping: arm: use generic pci_set_dma_mask and pci_set_consistent_dma_mask This converts arm to the generic pci_set_dma_mask and pci_set_consistent_dma_mask (removes HAVE_ARCH_PCI_SET_DMA_MASK for dmabounce). Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Looked-over-by: Russell King <rmk+kernel@arm.linux.org.uk> Cc: Jesse Barnes <jbarnes@virtuousgeek.org> Cc: Greg KH <greg@kroah.com> Cc: Kay Sievers <kay.sievers@vrfy.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
4ea0d7371e808628d11154b0d44140b70f05b998 |
|
24-Nov-2009 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dma-mapping: push buffer ownership down into dma-mapping.c Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com>
|
18eabe2347ae7a11b3db768695913724166dfb0e |
|
31-Oct-2009 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dma-mapping: introduce the idea of buffer ownership The DMA API has the notion of buffer ownership; make it explicit in the ARM implementation of this API. This gives us a set of hooks to allow us to deal with CPU cache issues arising from non-cache coherent DMA. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-By: Jamie Iles <jamie@jamieiles.com>
|
29cb8d0d249f6b8fa33683cc17622ff16ada834c |
|
31-Oct-2009 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dma-mapping: split dma_unmap_page() from dma_unmap_single() We will need to treat dma_unmap_page() differently from dma_unmap_single() Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Jamie Iles <jamie@jamieiles.com>
|
ef1baed8870d1eebb0c08d9a466e703f1a21b484 |
|
31-Oct-2009 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dma-mapping: provide dma_to_page() Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Jamie Iles <jamie@jamieiles.com>
|
1c4a4f48a14861a567c8861355bc8252da3a003f |
|
31-Oct-2009 |
Russell King <rmk+kernel@arm.linux.org.uk> |
ARM: dma-mapping: simplify page_to_dma() and __pfn_to_bus() The non-highmem() and the __pfn_to_bus() based page_to_dma() both compile to the same code, so it's pointless having these two different approaches. Use the __pfn_to_bus() based version. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Tested-By: Jamie Iles <jamie@jamieiles.com> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
|
58edb515724f9e63e569536d01ac8d8f8ddb367a |
|
09-Sep-2008 |
Nicolas Pitre <nico@cam.org> |
[ARM] make page_to_dma() highmem aware If a machine class has a custom __virt_to_bus() implementation then it must provide a __arch_page_to_dma() implementation as well which is _not_ based on page_address() to support highmem. This patch fixes the existing __arch_page_to_dma() implementations and provides a default implementation otherwise. The default implementation for highmem is based on __pfn_to_bus(), which is defined only when no custom __virt_to_bus() is provided by the machine class. That leaves only ebsa110 and footbridge, which cannot support highmem until they provide their own __arch_page_to_dma() implementation. But highmem support on those legacy platforms with limited memory is certainly not a priority. Signed-off-by: Nicolas Pitre <nico@marvell.com>
|
43377453af83b8ff8c1c731da1508bd6b84ebfea |
|
13-Mar-2009 |
Nicolas Pitre <nico@cam.org> |
[ARM] introduce dma_cache_maint_page() This is a helper to be used by the DMA mapping API to handle cache maintenance for memory identified by a page structure instead of a virtual address. Those pages may or may not be highmem pages, and when they're highmem pages, they may or may not be virtually mapped. When they're not mapped then there is no L1 cache to worry about. But even in that case the L2 cache must be processed since unmapped highmem pages can still be L2 cached. Signed-off-by: Nicolas Pitre <nico@marvell.com>
|
1124d6d21f80ec10cc962e2961c21a8dd1e0ca6a |
|
20-Oct-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: correct dma_supported() implementation dma_supported() is supposed to indicate whether the system can support the DMA mask it was passed, which depends on the maximal address which can be returned for DMA allocations. If the mask is smaller than that, we are unable to guarantee that the driver can reliably obtain suitable memory. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
9fa767928fe738aba8e99dae511e91f02fe20b28 |
|
13-Nov-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma-mapping: fix compiler warning arch/arm/mm/dma-mapping.c: In function `dma_sync_sg_for_cpu': arch/arm/mm/dma-mapping.c:588: warning: statement with no effect Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
7807c6098a716567fe408775c1c1999467088305 |
|
30-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: fix some comments in dma-mapping.h ... to prevent people being misled. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
309dbbabee7b19e003e1ba4b98f43d28f390a84e |
|
29-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: don't touch cache on dma_*_for_cpu() As per the dma_unmap_* calls, we don't touch the cache when a DMA buffer transitions from device to CPU ownership. Presently, no problems have been identified with speculative cache prefetching which in itself is a new feature in later architectures. We may have to revisit the DMA API later for these architectures anyway. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
0e18b5d7c6339311f1e32e7b186ae3556c5b6d33 |
|
29-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: add validation of DMA params Validate the direction argument like x86 does. In addition, validate the dma_unmap_* parameters against those passed to dma_map_* when using the DMA bounce code. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
3216a97bb0d5166ec5795aa3db1c3a02415ac060 |
|
25-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: coding style cleanups Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
125ab12acf64ff86b55d20e14db20becd917b7c4 |
|
25-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: fix dmabounce dma_sync_xxx() implementations The dmabounce dma_sync_xxx() implementations have been broken for quite some time; they all copy data between the DMA buffer and the CPU-visible buffer irrespective of the direction of the ownership change. (IOW, a DMA_FROM_DEVICE mapping copies data from the DMA buffer to the CPU buffer during a call to dma_sync_single_for_device().) Fix it by getting rid of sync_single(), moving the contents into the recently created dmabounce_sync_for_xxx() functions and adjusting appropriately. This also makes it possible to properly support the DMA range sync functions. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
8c8a0ec57ee285ff407e9a64b3a5a37eaf800ad8 |
|
25-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: use new dmabounce_sync_for_xxx() for dma_sync_single_xxx() Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
2638b4dbe768aba023a06acd8e7eba708bb76ee6 |
|
25-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: Reduce to one dma_sync_sg_* implementation Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
56f55f8b58a02e95b401cb50df05086cabeaeeb5 |
|
25-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma: provide a better dma_map_page() implementation We can translate a struct page directly to a DMA address using page_to_dma(). No need to use page_address() followed by virt_to_dma(). Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
afd1a321c49a250dab97cef6f2d3c3c9b9d0174a |
|
25-Sep-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] Update dma_map_sg()/dma_unmap_sg() API Update the ARM DMA scatter gather APIs for the scatterlist changes. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
9dd428680573d7867ee5e40fa3f059a98301d416 |
|
10-Aug-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma-mapping: provide sync_range APIs Convert the existing dma_sync_single_for_* APIs to the new range based APIs, and make the dma_sync_single_for_* API a superset of it. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
98ed7d4b1a4eebc1ac25929b6968673bef4d54c3 |
|
10-Aug-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] dma-mapping: improve type-safeness of DMA translations OMAP at least gets the return type(s) for the DMA translation functions wrong, which can lead to subtle errors. Avoid this by moving the DMA translation functions to asm/dma-mapping.h, and converting them to inline functions. Fix the OMAP DMA translation macros to use the correct argument and result types. Also, remove the unnecessary casts in dmabounce.c. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|
4baa9922430662431231ac637adedddbb0cfb2d7 |
|
02-Aug-2008 |
Russell King <rmk@dyn-67.arm.linux.org.uk> |
[ARM] move include/asm-arm to arch/arm/include/asm Move platform independent header files to arch/arm/include/asm, leaving those in asm/arch* and asm/plat* alone. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
|