Commit 4a25f2e

ashishmhetre8 authored and willdeacon committed
iommu: arm-smmu: disable large page mappings for Nvidia arm-smmu
Tegra194 and Tegra234 SoCs have an erratum that causes walk cache entries not to be invalidated correctly: the walk cache index generated for an IOVA is not the same across translation and invalidation requests. This leads to page faults when a PMD entry is released during unmap and then repopulated with a new PTE table by a subsequent map request. Disabling large page mappings avoids releasing PMD entries, so translations never see a stale PMD entry in the walk cache.

Fix this by limiting page mappings to PAGE_SIZE for Tegra194 and Tegra234 devices. This is the fix recommended by the Tegra hardware design team.

Acked-by: Robin Murphy <[email protected]>
Reviewed-by: Krishna Reddy <[email protected]>
Co-developed-by: Pritesh Raithatha <[email protected]>
Signed-off-by: Pritesh Raithatha <[email protected]>
Signed-off-by: Ashish Mhetre <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Will Deacon <[email protected]>
1 parent 95d4782 commit 4a25f2e
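To make the mechanism concrete: when the driver clamps pgsize_bitmap to PAGE_SIZE, the IOMMU core can only ever install leaf entries at PTE granularity, so PMD-level block entries are never created and therefore never released on unmap. The toy program below models the page-size selection that core IOMMU code performs against a driver's pgsize_bitmap; it is a minimal sketch of the idea, with a hypothetical pick_pgsize() helper, not the kernel's actual iommu_pgsize() logic.

#include <stdio.h>

/*
 * Toy model of the page-size selection an IOMMU core performs when
 * splitting a map request against a driver's pgsize_bitmap: pick the
 * largest supported size that fits the remaining length and matches
 * the IOVA alignment. pick_pgsize() is a hypothetical helper for
 * illustration; it is not the kernel's iommu_pgsize().
 */
static unsigned long long pick_pgsize(unsigned long long pgsize_bitmap,
				      unsigned long long iova,
				      unsigned long long len)
{
	for (int b = 62; b >= 0; b--) {
		unsigned long long sz = 1ULL << b;

		if ((pgsize_bitmap & sz) && sz <= len && !(iova & (sz - 1)))
			return sz;
	}
	return 0;
}

int main(void)
{
	/* 4 KiB | 2 MiB | 1 GiB supported: a 2 MiB request maps as one PMD block. */
	unsigned long long full = (1ULL << 12) | (1ULL << 21) | (1ULL << 30);
	/* Erratum workaround: only PAGE_SIZE (4 KiB here) is permitted. */
	unsigned long long clamped = 1ULL << 12;

	printf("full bitmap:    %llu-byte step\n",
	       pick_pgsize(full, 0x200000, 1ULL << 21));    /* 2097152 */
	printf("clamped bitmap: %llu-byte step\n",
	       pick_pgsize(clamped, 0x200000, 1ULL << 21)); /* 4096 */
	return 0;
}

With the clamped bitmap, a 2 MiB map request proceeds as 512 PAGE_SIZE mappings through an existing PTE table, so the stale-PMD window described in the commit message never opens.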

File tree

1 file changed: +30 -0 lines changed


drivers/iommu/arm/arm-smmu/arm-smmu-nvidia.c

Lines changed: 30 additions & 0 deletions
@@ -258,6 +258,34 @@ static void nvidia_smmu_probe_finalize(struct arm_smmu_device *smmu, struct devi
 			dev_name(dev), err);
 }
 
+static int nvidia_smmu_init_context(struct arm_smmu_domain *smmu_domain,
+				    struct io_pgtable_cfg *pgtbl_cfg,
+				    struct device *dev)
+{
+	struct arm_smmu_device *smmu = smmu_domain->smmu;
+	const struct device_node *np = smmu->dev->of_node;
+
+	/*
+	 * Tegra194 and Tegra234 SoCs have the erratum that causes walk cache
+	 * entries to not be invalidated correctly. The problem is that the walk
+	 * cache index generated for IOVA is not same across translation and
+	 * invalidation requests. This is leading to page faults when PMD entry
+	 * is released during unmap and populated with new PTE table during
+	 * subsequent map request. Disabling large page mappings avoids the
+	 * release of PMD entry and avoid translations seeing stale PMD entry in
+	 * walk cache.
+	 * Fix this by limiting the page mappings to PAGE_SIZE on Tegra194 and
+	 * Tegra234.
+	 */
+	if (of_device_is_compatible(np, "nvidia,tegra234-smmu") ||
+	    of_device_is_compatible(np, "nvidia,tegra194-smmu")) {
+		smmu->pgsize_bitmap = PAGE_SIZE;
+		pgtbl_cfg->pgsize_bitmap = smmu->pgsize_bitmap;
+	}
+
+	return 0;
+}
+
 static const struct arm_smmu_impl nvidia_smmu_impl = {
 	.read_reg = nvidia_smmu_read_reg,
 	.write_reg = nvidia_smmu_write_reg,
@@ -268,10 +296,12 @@ static const struct arm_smmu_impl nvidia_smmu_impl = {
 	.global_fault = nvidia_smmu_global_fault,
 	.context_fault = nvidia_smmu_context_fault,
 	.probe_finalize = nvidia_smmu_probe_finalize,
+	.init_context = nvidia_smmu_init_context,
 };
 
 static const struct arm_smmu_impl nvidia_smmu_single_impl = {
 	.probe_finalize = nvidia_smmu_probe_finalize,
+	.init_context = nvidia_smmu_init_context,
 };
 
 struct arm_smmu_device *nvidia_smmu_impl_init(struct arm_smmu_device *smmu)
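As a design note, the fix lands in the impl->init_context hook because that hook runs while the domain's io_pgtable_cfg is still being assembled, before the page-table ops are allocated, so narrowing pgtbl_cfg->pgsize_bitmap there takes effect for every context on the affected SoCs. The self-contained model below illustrates that hook-dispatch pattern; every type and name in it is a stand-in invented for this sketch, not the kernel's actual arm-smmu code.

#include <stdio.h>

/*
 * Self-contained model of the impl-hook pattern used by this commit.
 * All types here are illustrative stand-ins: a vendor "impl" gets a
 * chance to override the page-size bitmap while the page-table
 * config is being built, before it is consumed.
 */
#define PAGE_SIZE_MODEL (1UL << 12)

struct io_pgtable_cfg_model { unsigned long pgsize_bitmap; };
struct smmu_impl_model {
	int (*init_context)(struct io_pgtable_cfg_model *cfg);
};

static int nvidia_init_context_model(struct io_pgtable_cfg_model *cfg)
{
	cfg->pgsize_bitmap = PAGE_SIZE_MODEL;	/* PTE-granule mappings only */
	return 0;
}

static const struct smmu_impl_model nvidia_impl_model = {
	.init_context = nvidia_init_context_model,
};

int main(void)
{
	struct io_pgtable_cfg_model cfg = {
		/* Default: 4 KiB, 2 MiB and 1 GiB granules supported. */
		.pgsize_bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30),
	};

	/* Vendor hook runs before the config is consumed. */
	if (nvidia_impl_model.init_context)
		nvidia_impl_model.init_context(&cfg);

	printf("effective pgsize_bitmap: %#lx\n", cfg.pgsize_bitmap);
	return 0;
}

The appeal of this structure is that the quirk lives entirely in the vendor impl: the generic driver keeps building configs the same way, and the hook narrows pgsize_bitmap only when the compatible string identifies an affected SoC.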
