Cloud platforms provide the flexibility to create Spark clusters for short durations, either as Function-as-a-Service deployments or on Kubernetes. However, it is important to write and execute Spark jobs in a way that gets the best mileage from these clusters. Because clusters are billed by duration of use, job performance becomes critical not only for improving efficiency but also for reducing total cost. This talk covers practical tips and hints on tuning Spark jobs for best performance on popular resource managers such as YARN and Kubernetes. It will also address frequently asked questions from users and customers about Spark clusters on popular cloud platforms.