public class FederationInterceptorREST extends AbstractRESTRequestInterceptor

Extends the AbstractRESTRequestInterceptor class and provides an implementation for federation of YARN RMs and scaling an application across multiple YARN SubClusters. All the federation-specific implementation is encapsulated in this class. This is always the last interceptor in the chain (see the configuration sketch below).

Constructor and Description |
---|
FederationInterceptorREST() |
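For context, the Router builds its REST interceptor chain from a comma-separated pipeline property; the following is a minimal sketch, assuming the yarn.router.webapp.interceptor-class.pipeline setting described in the YARN Federation documentation, and it only mirrors in code what is normally placed in the Router's yarn-site.xml.

```java
import org.apache.hadoop.conf.Configuration;

public class RouterPipelineConfig {
    public static void main(String[] args) {
        // Normally set in the Router's yarn-site.xml rather than in code;
        // shown programmatically here only for illustration.
        Configuration conf = new Configuration();
        conf.set("yarn.router.webapp.interceptor-class.pipeline",
            // Comma-separated chain of RESTRequestInterceptor implementations;
            // FederationInterceptorREST should be the last entry in the chain.
            "org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST");
        System.out.println(conf.get("yarn.router.webapp.interceptor-class.pipeline"));
    }
}
```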
Modifier and Type | Method and Description |
---|---|
javax.ws.rs.core.Response |
addToClusterNodeLabels(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo newNodeLabels,
javax.servlet.http.HttpServletRequest hsr)
This method adds specific node labels for specific nodes, and it is
reachable by using
RMWSConsts.ADD_NODE_LABELS . |
javax.ws.rs.core.Response |
cancelDelegationToken(javax.servlet.http.HttpServletRequest hsr)
Cancel DelegationToken.
|
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.RMQueueAclInfo |
checkUserAccessToQueue(String queue,
String username,
String queueAclType,
javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response |
createNewApplication(javax.servlet.http.HttpServletRequest hsr)
The YARN Router forwards every getNewApplication request to any RM.
|
javax.ws.rs.core.Response |
createNewReservation(javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response |
deleteReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDeleteRequestInfo resContext,
javax.servlet.http.HttpServletRequest hsr) |
String |
dumpSchedulerLogs(String time,
javax.servlet.http.HttpServletRequest hsr)
This method dumps the scheduler logs for the time given as input, and it is
reachable by using
RMWSConsts.SCHEDULER_LOGS . |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo |
get() |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo |
getActivities(javax.servlet.http.HttpServletRequest hsr,
String nodeId,
String groupBy)
This method retrieves all the activities in a specific node, and it is
reachable by using
RMWSConsts.SCHEDULER_ACTIVITIES . |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo |
getApp(javax.servlet.http.HttpServletRequest hsr,
String appId,
Set<String> unselectedFields)
The YARN Router will forward to the respective YARN RM in which the AM is
running.
|
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo |
getAppActivities(javax.servlet.http.HttpServletRequest hsr,
String appId,
String time,
Set<String> requestPriorities,
Set<String> allocationRequestIds,
String groupBy,
String limit,
Set<String> actions,
boolean summarize) |
org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo |
getAppAttempt(javax.servlet.http.HttpServletRequest req,
javax.servlet.http.HttpServletResponse res,
String appId,
String appAttemptId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppAttemptsInfo |
getAppAttempts(javax.servlet.http.HttpServletRequest hsr,
String appId) |
org.apache.hadoop.yarn.util.LRUCacheHashMap<RouterAppInfoCacheKey,org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppsInfo> |
getAppInfosCaches() |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority |
getAppPriority(javax.servlet.http.HttpServletRequest hsr,
String appId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue |
getAppQueue(javax.servlet.http.HttpServletRequest hsr,
String appId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppsInfo |
getApps(javax.servlet.http.HttpServletRequest hsr,
String stateQuery,
Set<String> statesQuery,
String finalStatusQuery,
String userQuery,
String queueQuery,
String count,
String startedBegin,
String startedEnd,
String finishBegin,
String finishEnd,
Set<String> applicationTypes,
Set<String> applicationTags,
String name,
Set<String> unselectedFields)
The YARN Router will forward the request to all the YARN RMs in parallel
and then group all the ApplicationReports by ApplicationId.
|
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState |
getAppState(javax.servlet.http.HttpServletRequest hsr,
String appId)
The YARN Router will forward to the respective YARN RM in which the AM is
running.
|
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationStatisticsInfo |
getAppStatistics(javax.servlet.http.HttpServletRequest hsr,
Set<String> stateQueries,
Set<String> typeQueries) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo |
getAppTimeout(javax.servlet.http.HttpServletRequest hsr,
String appId,
String type) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutsInfo |
getAppTimeouts(javax.servlet.http.HttpServletRequest hsr,
String appId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.BulkActivitiesInfo |
getBulkActivities(javax.servlet.http.HttpServletRequest hsr,
String groupBy,
int activitiesCount)
This method retrieves the last n activities inside the scheduler, and it is
reachable by using
RMWSConsts.SCHEDULER_BULK_ACTIVITIES . |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo |
getClusterInfo()
This method retrieves the cluster information, and it is reachable by using
RMWSConsts.INFO . |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo |
getClusterMetricsInfo() |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo |
getClusterNodeLabels(javax.servlet.http.HttpServletRequest hsr) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterUserInfo |
getClusterUserInfo(javax.servlet.http.HttpServletRequest hsr)
This method retrieves the cluster user information, and it is reachable by using
RMWSConsts.CLUSTER_USER_INFO . |
org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo |
getContainer(javax.servlet.http.HttpServletRequest req,
javax.servlet.http.HttpServletResponse res,
String appId,
String appAttemptId,
String containerId) |
org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo |
getContainers(javax.servlet.http.HttpServletRequest req,
javax.servlet.http.HttpServletResponse res,
String appId,
String appAttemptId) |
protected DefaultRequestInterceptorREST |
getInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId) |
Map<org.apache.hadoop.yarn.server.federation.store.records.SubClusterId,DefaultRequestInterceptorREST> |
getInterceptors() |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo |
getLabelsOnNode(javax.servlet.http.HttpServletRequest hsr,
String nodeId) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.LabelsToNodesInfo |
getLabelsToNodes(Set<String> labels) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeInfo |
getNode(String nodeId)
The YARN Router will forward the request to all the SubClusters to find
where the node is running.
|
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodesInfo |
getNodes(String states)
The YARN Router will forward the request to all the YARN RMs in parallel
and then remove duplicated NodeInfo entries by NodeId.
|
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsInfo |
getNodeToLabels(javax.servlet.http.HttpServletRequest hsr) |
protected DefaultRequestInterceptorREST |
getOrCreateInterceptorByAppId(String appId) |
protected DefaultRequestInterceptorREST |
getOrCreateInterceptorByNodeId(String nodeId) |
protected DefaultRequestInterceptorREST |
getOrCreateInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId,
String webAppAddress) |
protected DefaultRequestInterceptorREST |
getOrCreateInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo subClusterInfo) |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo |
getRMNodeLabels(javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response |
getSchedulerConfiguration(javax.servlet.http.HttpServletRequest hsr)
This method retrieves all the Scheduler configuration, and it is reachable
by using
RMWSConsts.SCHEDULER_CONF . |
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo |
getSchedulerInfo()
This method retrieves the current scheduler status, and it is reachable by
using
RMWSConsts.SCHEDULER . |
void |
init(String user)
Initializes the
RESTRequestInterceptor . |
Map<org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo,org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodesInfo> |
invokeConcurrentGetNodeLabel() |
javax.ws.rs.core.Response |
listReservation(String queue,
String reservationId,
long startTime,
long endTime,
boolean includeResourceAllocations,
javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response |
postDelegationToken(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.DelegationToken tokenData,
javax.servlet.http.HttpServletRequest hsr)
This method posts a delegation token from the client.
|
javax.ws.rs.core.Response |
postDelegationTokenExpiration(javax.servlet.http.HttpServletRequest hsr)
This method updates the expiration for a delegation token from the client.
|
javax.ws.rs.core.Response |
removeFromClusterNodeLabels(Set<String> oldNodeLabels,
javax.servlet.http.HttpServletRequest hsr)
This method removes all the node labels for specific nodes, and it is
reachable by using
RMWSConsts.REMOVE_NODE_LABELS . |
javax.ws.rs.core.Response |
replaceLabelsOnNode(Set<String> newNodeLabelsName,
javax.servlet.http.HttpServletRequest hsr,
String nodeId)
This method replaces all the node labels for a specific node, and it is
reachable by using
RMWSConsts.NODES_NODEID_REPLACE_LABELS . |
javax.ws.rs.core.Response |
replaceLabelsOnNodes(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsEntryList newNodeToLabels,
javax.servlet.http.HttpServletRequest hsr)
This method replaces all the node labels for specific nodes, and it is
reachable by using
RMWSConsts.REPLACE_NODE_TO_LABELS . |
void |
setAllowPartialResult(boolean allowPartialResult) |
void |
setNextInterceptor(RESTRequestInterceptor next)
Sets the
RESTRequestInterceptor in the chain. |
void |
shutdown()
Disposes the
RESTRequestInterceptor . |
javax.ws.rs.core.Response |
signalToContainer(String containerId,
String command,
javax.servlet.http.HttpServletRequest req) |
javax.ws.rs.core.Response |
submitApplication(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationSubmissionContextInfo newApp,
javax.servlet.http.HttpServletRequest hsr)
Today, in YARN there are no checks of any applicationId submitted.
|
javax.ws.rs.core.Response |
submitReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationSubmissionRequestInfo resContext,
javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response |
updateApplicationPriority(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority targetPriority,
javax.servlet.http.HttpServletRequest hsr,
String appId) |
javax.ws.rs.core.Response |
updateApplicationTimeout(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo appTimeout,
javax.servlet.http.HttpServletRequest hsr,
String appId) |
javax.ws.rs.core.Response |
updateAppQueue(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue targetQueue,
javax.servlet.http.HttpServletRequest hsr,
String appId) |
javax.ws.rs.core.Response |
updateAppState(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState targetState,
javax.servlet.http.HttpServletRequest hsr,
String appId)
The YARN Router will forward to the respective YARN RM in which the AM is
running.
|
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo |
updateNodeResource(javax.servlet.http.HttpServletRequest hsr,
String nodeId,
org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceOptionInfo resourceOption)
This method changes the resources of a specific node, and it is reachable
by using
RMWSConsts.NODE_RESOURCE . |
javax.ws.rs.core.Response |
updateReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationUpdateRequestInfo resContext,
javax.servlet.http.HttpServletRequest hsr) |
javax.ws.rs.core.Response |
updateSchedulerConfiguration(org.apache.hadoop.yarn.webapp.dao.SchedConfUpdateInfo mutationInfo,
javax.servlet.http.HttpServletRequest hsr)
This method updates the Scheduler configuration, and it is reachable by
using
RMWSConsts.SCHEDULER_CONF . |
Methods inherited from class AbstractRESTRequestInterceptor: getConf, getNextInterceptor, getRouterClientRMService, getUser, setConf, setRouterClientRMService
public void init(String user)
Description copied from class: AbstractRESTRequestInterceptor
Initializes the RESTRequestInterceptor.
Specified by: init in interface RESTRequestInterceptor
Overrides: init in class AbstractRESTRequestInterceptor
user - the name of the client

@VisibleForTesting
protected DefaultRequestInterceptorREST getInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId)

protected DefaultRequestInterceptorREST getOrCreateInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo subClusterInfo)

protected DefaultRequestInterceptorREST getOrCreateInterceptorByAppId(String appId) throws org.apache.hadoop.yarn.exceptions.YarnException
org.apache.hadoop.yarn.exceptions.YarnException

protected DefaultRequestInterceptorREST getOrCreateInterceptorByNodeId(String nodeId)

@VisibleForTesting
protected DefaultRequestInterceptorREST getOrCreateInterceptorForSubCluster(org.apache.hadoop.yarn.server.federation.store.records.SubClusterId subClusterId, String webAppAddress)
public javax.ws.rs.core.Response createNewApplication(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
The YARN Router forwards every getNewApplication request to any RM. Possible failures and behaviors:
- Client: identical behavior as RMWebServices.
- Router: the Client will time out and resubmit.
- ResourceManager: the Router will time out and contact another RM.
- StateStore: not in the execution path.
A client-side sketch of this call follows the exception list below.
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
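As an illustration of the call this method serves, here is a minimal client-side sketch that asks the Router for a new application id over its REST interface. The router-host:8089 address is an assumption of this example, and the /ws/v1/cluster/apps/new-application path is the standard RM web-services path that the Router mirrors.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NewApplicationExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical Router web address; in a real deployment it comes from
        // yarn.router.webapp.address.
        URL url = new URL("http://router-host:8089/ws/v1/cluster/apps/new-application");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");                      // new-application is a POST with an empty body
        conn.setRequestProperty("Accept", "application/json");

        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        // The JSON response contains the application-id chosen by whichever RM
        // the Router forwarded this request to, plus the maximum resource capability.
        System.out.println(body);
    }
}
```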
public javax.ws.rs.core.Response submitApplication(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationSubmissionContextInfo newApp, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
Today, in YARN there are no checks of any applicationId submitted.
Base scenario:
The Client submits an application to the Router. The Router selects one SubCluster to forward the request. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId. The State Store replies with the selected SubCluster (e.g. SC1). The Router submits the request to the selected SubCluster.
In case of State Store failure:
The Client submits an application to the Router. The Router selects one SubCluster to forward the request. The Router tries to insert a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId. Because the State Store is down, the Router times out and retries according to the FederationFacade settings. The Router replies to the Client with an error message.
If the State Store fails after inserting the tuple: identical behavior as RMWebServices.
In case of Router failure:
Scenario 1 – Crash before submission to the ResourceManager:
The Client submits an application to the Router. The Router selects one SubCluster to forward the request. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId. The Router crashes. The Client times out and resubmits the application. The Router selects one SubCluster to forward the request. The Router tries to insert a tuple into the State Store with the newly selected SubCluster (e.g. SC2) and the appId. Because a tuple for this appId is already present, the State Store returns the previously selected SubCluster (e.g. SC1). The Router submits the request to that SubCluster (e.g. SC1).
Scenario 2 – Crash after submission to the ResourceManager:
The Client submits an application to the Router. The Router selects one SubCluster to forward the request. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId. The Router submits the request to the selected SubCluster. The Router crashes. The Client times out and resubmits the application. The Router selects one SubCluster to forward the request. The Router tries to insert a tuple into the State Store with the newly selected SubCluster (e.g. SC2) and the appId. The State Store replies with the previously selected SubCluster (e.g. SC1). The Router submits the request to that SubCluster (e.g. SC1). When a client re-submits the same application to the same RM, the RM does not raise an exception and replies with an "operation successful" message. (A minimal sketch of this idempotent home-SubCluster selection follows the exception list below.)
In case of Client failure: identical behavior as RMWebServices.
In case of ResourceManager failure:
The Client submits an application to the Router. The Router selects one SubCluster to forward the request. The Router inserts a tuple into the State Store with the selected SubCluster (e.g. SC1) and the appId. The Router submits the request to the selected SubCluster. The entire SubCluster is down – all the RMs in HA, or the master RM, are not reachable. The Router times out. The Router selects a new SubCluster to forward the request. The Router updates the tuple in the State Store with the newly selected SubCluster (e.g. SC2) and the appId. The State Store replies with an OK answer. The Router submits the request to the selected SubCluster (e.g. SC2).
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
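The Router-failure scenarios above hinge on the State Store behaving like an atomic "insert or return the existing entry" operation keyed by appId. The stand-alone sketch below illustrates that idempotency with a ConcurrentHashMap standing in for the real FederationStateStore; the class and method names here are hypothetical and only mirror the behavior described above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustration only: the real Router talks to the FederationStateStore through
// its facade; this map merely mimics the "insert or return existing" semantics.
public class HomeSubClusterSelection {

    private final Map<String, String> appToHomeSubCluster = new ConcurrentHashMap<>();

    /**
     * Records subClusterId as the home SubCluster for appId, unless a home
     * SubCluster was already recorded; in that case the earlier choice wins.
     *
     * @return the SubCluster the application must be submitted to
     */
    public String addOrGetHomeSubCluster(String appId, String subClusterId) {
        String previous = appToHomeSubCluster.putIfAbsent(appId, subClusterId);
        return previous != null ? previous : subClusterId;
    }

    public static void main(String[] args) {
        HomeSubClusterSelection store = new HomeSubClusterSelection();

        // First submission attempt: the Router picks SC1 for the application.
        System.out.println(store.addOrGetHomeSubCluster("application_1_0001", "SC1")); // SC1

        // The Router crashes and the client resubmits; the new Router instance
        // picks SC2, but the store returns the previously chosen SC1, so the
        // application is still submitted to SC1.
        System.out.println(store.addOrGetHomeSubCluster("application_1_0001", "SC2")); // SC1
    }
}
```

In the ResourceManager-failure case the Router instead updates the stored tuple before retrying against the newly selected SubCluster.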
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppInfo getApp(javax.servlet.http.HttpServletRequest hsr, String appId, Set<String> unselectedFields)
The YARN Router will forward to the respective YARN RM in which the AM is running. Possible failures and behaviors:
- Client: identical behavior as RMWebServices.
- Router: the Client will time out and resubmit the request.
- ResourceManager: the Router will time out and the call will fail.
- State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
public javax.ws.rs.core.Response updateAppState(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState targetState, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
The YARN Router will forward to the respective YARN RM in which the AM is running. Possible failures and behaviors:
- Client: identical behavior as RMWebServices.
- Router: the Client will time out and resubmit the request.
- ResourceManager: the Router will time out and the call will fail.
- State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppsInfo getApps(javax.servlet.http.HttpServletRequest hsr, String stateQuery, Set<String> statesQuery, String finalStatusQuery, String userQuery, String queueQuery, String count, String startedBegin, String startedEnd, String finishBegin, String finishEnd, Set<String> applicationTypes, Set<String> applicationTags, String name, Set<String> unselectedFields)
The YARN Router will forward the request to all the YARN RMs in parallel and then group all the ApplicationReports by ApplicationId. Possible failures and behaviors:
- Client: identical behavior as RMWebServices.
- Router: the Client will time out and resubmit the request.
- ResourceManager: the Router calls each YARN RM in parallel, using one thread per YARN RM. If a YARN RM fails, that single call times out; the Router still merges the ApplicationReports it did receive and returns a partial list to the client.
- State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
A minimal sketch of this fan-out-and-merge pattern follows.
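The sketch below is a stand-alone illustration of the fan-out-and-merge pattern described above, not the Router's actual implementation; SubClusterClient and AppReport are hypothetical stand-ins for the per-SubCluster interceptors and the real ApplicationReport DAO, and error handling is reduced to "skip the failed sub-cluster".

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FanOutApps {

    /** Hypothetical stand-in for a per-SubCluster REST client. */
    interface SubClusterClient {
        List<AppReport> getApps() throws Exception;
    }

    /** Hypothetical, trimmed-down application report. */
    record AppReport(String applicationId, String state) {}

    static Map<String, AppReport> getAppsFromAllSubClusters(List<SubClusterClient> clients)
            throws InterruptedException {
        // One task (thread) per sub-cluster RM, mirroring "one thread per YARN RM".
        ExecutorService pool = Executors.newFixedThreadPool(clients.size());
        List<Future<List<AppReport>>> futures = new ArrayList<>();
        for (SubClusterClient client : clients) {
            futures.add(pool.submit((Callable<List<AppReport>>) client::getApps));
        }

        // Group the reports by ApplicationId; a failed or timed-out sub-cluster
        // simply contributes nothing, so the result may be a partial list.
        Map<String, AppReport> byAppId = new LinkedHashMap<>();
        for (Future<List<AppReport>> future : futures) {
            try {
                for (AppReport report : future.get()) {
                    byAppId.put(report.applicationId(), report);
                }
            } catch (ExecutionException e) {
                // One sub-cluster failed: log and keep the partial result.
            }
        }
        pool.shutdown();
        return byAppId;
    }
}
```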
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeInfo getNode(String nodeId)
The YARN Router will forward the request to all the SubClusters to find where the node is running. Possible failures and behaviors:
- Client: identical behavior as RMWebServices.
- Router: the Client will time out and resubmit the request.
- ResourceManager: the Router will time out and the call will fail.
- State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodesInfo getNodes(String states)
The YARN Router will forward the request to all the YARN RMs in parallel and then remove duplicated NodeInfo entries by NodeId. Possible failures and behaviors:
- Client: identical behavior as RMWebServices.
- Router: the Client will time out and resubmit the request.
- ResourceManager: the Router calls each YARN RM in parallel, using one thread per YARN RM. If a YARN RM fails, that single call times out; the Router still uses the NodesInfo it did receive and returns a partial list to the client.
- State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
A short sketch of the de-duplication step follows.
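The de-duplication step can be pictured as "keep one NodeInfo per NodeId". The sketch below uses a hypothetical NodeReport record and a keep-the-first policy, which may differ from the policy the Router actually applies when the same node is reported by more than one sub-cluster.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupNodes {

    /** Hypothetical, trimmed-down node report. */
    record NodeReport(String nodeId, String state) {}

    static List<NodeReport> dedupByNodeId(List<NodeReport> fromAllSubClusters) {
        Map<String, NodeReport> byNodeId = new LinkedHashMap<>();
        for (NodeReport node : fromAllSubClusters) {
            // Keep the first report seen for a given NodeId and drop duplicates
            // contributed by other sub-clusters (illustrative policy only).
            byNodeId.putIfAbsent(node.nodeId(), node);
        }
        return List.copyOf(byNodeId.values());
    }
}
```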
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo updateNodeResource(javax.servlet.http.HttpServletRequest hsr, String nodeId, org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceOptionInfo resourceOption)
This method changes the resources of a specific node, and it is reachable by using RMWSConsts.NODE_RESOURCE.
hsr - the servlet request
nodeId - the node we want to retrieve the information for. It is a PathParam.
resourceOption - the resource change

public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterMetricsInfo getClusterMetricsInfo()
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppState getAppState(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
The YARN Router will forward to the respective YARN RM in which the AM is running. Possible failures and behaviors:
- Client: identical behavior as RMWebServices.
- Router: the Client will time out and resubmit the request.
- ResourceManager: the Router will time out and the call will fail.
- State Store: the Router will time out and retry according to the FederationFacade settings, if the failure happened before the select operation.
org.apache.hadoop.security.authorize.AuthorizationException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo get()

public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterInfo getClusterInfo()
This method retrieves the cluster information, and it is reachable by using RMWSConsts.INFO.
In Federation mode, we will return a FederationClusterInfo object, which contains a set of ClusterInfo.

public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ClusterUserInfo getClusterUserInfo(javax.servlet.http.HttpServletRequest hsr)
This method retrieves the cluster user information, and it is reachable by using RMWSConsts.CLUSTER_USER_INFO.
In Federation mode, we will return a ClusterUserInfo object, which contains a set of ClusterUserInfo.
hsr - the servlet request

public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.SchedulerTypeInfo getSchedulerInfo()
This method retrieves the current scheduler status, and it is reachable by using RMWSConsts.SCHEDULER.
In Federation mode, the SchedulerType information of the cluster cannot be integrated and displayed; the specific cluster information needs to be marked.

public String dumpSchedulerLogs(String time, javax.servlet.http.HttpServletRequest hsr) throws IOException
This method dumps the scheduler logs for the time given as input, and it is reachable by using RMWSConsts.SCHEDULER_LOGS.
time - the period of time. It is a FormParam.
hsr - the servlet request
IOException - when it cannot create the dump log file

public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ActivitiesInfo getActivities(javax.servlet.http.HttpServletRequest hsr, String nodeId, String groupBy)
This method retrieves all the activities in a specific node, and it is reachable by using RMWSConsts.SCHEDULER_ACTIVITIES.
hsr - the servlet request
nodeId - the node we want to retrieve the activities for. It is a QueryParam.
groupBy - the groupBy type by which the activities should be aggregated. It is a QueryParam.

public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.BulkActivitiesInfo getBulkActivities(javax.servlet.http.HttpServletRequest hsr, String groupBy, int activitiesCount) throws InterruptedException
This method retrieves the last n activities inside the scheduler, and it is reachable by using RMWSConsts.SCHEDULER_BULK_ACTIVITIES.
hsr - the servlet request
groupBy - the groupBy type by which the activities should be aggregated. It is a QueryParam.
activitiesCount - number of activities
InterruptedException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppActivitiesInfo getAppActivities(javax.servlet.http.HttpServletRequest hsr, String appId, String time, Set<String> requestPriorities, Set<String> allocationRequestIds, String groupBy, String limit, Set<String> actions, boolean summarize)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ApplicationStatisticsInfo getAppStatistics(javax.servlet.http.HttpServletRequest hsr, Set<String> stateQueries, Set<String> typeQueries)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsInfo getNodeToLabels(javax.servlet.http.HttpServletRequest hsr) throws IOException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo getRMNodeLabels(javax.servlet.http.HttpServletRequest hsr) throws IOException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.LabelsToNodesInfo getLabelsToNodes(Set<String> labels) throws IOException
IOException
public javax.ws.rs.core.Response replaceLabelsOnNodes(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeToLabelsEntryList newNodeToLabels, javax.servlet.http.HttpServletRequest hsr) throws IOException
This method replaces all the node labels for specific nodes, and it is reachable by using RMWSConsts.REPLACE_NODE_TO_LABELS.
newNodeToLabels - the list of new labels. It is a content param.
hsr - the servlet request
IOException - if an exception happened
See also: ResourceManagerAdministrationProtocol.replaceLabelsOnNode(org.apache.hadoop.yarn.server.api.protocolrecords.ReplaceLabelsOnNodeRequest)
public javax.ws.rs.core.Response replaceLabelsOnNode(Set<String> newNodeLabelsName, javax.servlet.http.HttpServletRequest hsr, String nodeId) throws Exception
This method replaces all the node labels for a specific node, and it is reachable by using RMWSConsts.NODES_NODEID_REPLACE_LABELS.
newNodeLabelsName - the list of new labels. It is a QueryParam.
hsr - the servlet request
nodeId - the node we want to replace the node labels on. It is a PathParam.
Exception - if an exception happened
See also: ResourceManagerAdministrationProtocol.replaceLabelsOnNode(org.apache.hadoop.yarn.server.api.protocolrecords.ReplaceLabelsOnNodeRequest)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo getClusterNodeLabels(javax.servlet.http.HttpServletRequest hsr) throws IOException
IOException
public javax.ws.rs.core.Response addToClusterNodeLabels(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo newNodeLabels, javax.servlet.http.HttpServletRequest hsr) throws Exception
This method adds specific node labels for specific nodes, and it is reachable by using RMWSConsts.ADD_NODE_LABELS.
newNodeLabels - the node labels to add. It is a content param.
hsr - the servlet request
Exception - in case of bad request
See also: ResourceManagerAdministrationProtocol.addToClusterNodeLabels(org.apache.hadoop.yarn.server.api.protocolrecords.AddToClusterNodeLabelsRequest)
public javax.ws.rs.core.Response removeFromClusterNodeLabels(Set<String> oldNodeLabels, javax.servlet.http.HttpServletRequest hsr) throws Exception
This method removes all the node labels for specific nodes, and it is reachable by using RMWSConsts.REMOVE_NODE_LABELS.
oldNodeLabels - the node labels to remove. It is a QueryParam.
hsr - the servlet request
Exception - in case of bad request
See also: ResourceManagerAdministrationProtocol.removeFromClusterNodeLabels(org.apache.hadoop.yarn.server.api.protocolrecords.RemoveFromClusterNodeLabelsRequest)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodeLabelsInfo getLabelsOnNode(javax.servlet.http.HttpServletRequest hsr, String nodeId) throws IOException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority getAppPriority(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public javax.ws.rs.core.Response updateApplicationPriority(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppPriority targetPriority, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue getAppQueue(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public javax.ws.rs.core.Response updateAppQueue(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppQueue targetQueue, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
public javax.ws.rs.core.Response postDelegationToken(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.DelegationToken tokenData, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException, Exception
This method posts a delegation token from the client.
tokenData - the token to delegate. It is a content param.
hsr - the servlet request
org.apache.hadoop.security.authorize.AuthorizationException - if Kerberos auth failed
IOException - if the delegation failed
InterruptedException - if interrupted
Exception - in case of bad request
A client-side sketch of this call follows.
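For illustration, a minimal unauthenticated sketch of this call; the /ws/v1/cluster/delegation-token path is the RM delegation-token REST endpoint that the Router exposes, router-host:8089 is an assumption of this example, and the Kerberos/SPNEGO negotiation that a secure cluster requires for this operation is omitted entirely.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DelegationTokenExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical Router address; SPNEGO authentication is omitted here,
        // so against a secured cluster this sketch would be rejected.
        URL url = new URL("http://router-host:8089/ws/v1/cluster/delegation-token");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Accept", "application/json");

        // The request body names the renewer of the token.
        byte[] payload = "{\"renewer\":\"yarn\"}".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload);
        }

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // token string, renewer, and expiration time
            }
        }
    }
}
```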
public javax.ws.rs.core.Response postDelegationTokenExpiration(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException, Exception
This method updates the expiration for a delegation token from the client.
hsr - the servlet request
org.apache.hadoop.security.authorize.AuthorizationException - if Kerberos auth failed
IOException - if the delegation failed
InterruptedException - if interrupted
Exception - in case of bad request
public javax.ws.rs.core.Response cancelDelegationToken(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException, Exception
Cancel DelegationToken.
hsr - the servlet request
org.apache.hadoop.security.authorize.AuthorizationException - if Kerberos auth failed
IOException - if the delegation failed
InterruptedException - if interrupted
Exception - in case of bad request

public javax.ws.rs.core.Response createNewReservation(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response submitReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationSubmissionRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response updateReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationUpdateRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response deleteReservation(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ReservationDeleteRequestInfo resContext, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, IOException, InterruptedException
org.apache.hadoop.security.authorize.AuthorizationException
IOException
InterruptedException
public javax.ws.rs.core.Response listReservation(String queue, String reservationId, long startTime, long endTime, boolean includeResourceAllocations, javax.servlet.http.HttpServletRequest hsr) throws Exception
Exception
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo getAppTimeout(javax.servlet.http.HttpServletRequest hsr, String appId, String type) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutsInfo getAppTimeouts(javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public javax.ws.rs.core.Response updateApplicationTimeout(org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppTimeoutInfo appTimeout, javax.servlet.http.HttpServletRequest hsr, String appId) throws org.apache.hadoop.security.authorize.AuthorizationException, org.apache.hadoop.yarn.exceptions.YarnException, InterruptedException, IOException
org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.yarn.exceptions.YarnException
InterruptedException
IOException
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppAttemptsInfo getAppAttempts(javax.servlet.http.HttpServletRequest hsr, String appId)
public org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.RMQueueAclInfo checkUserAccessToQueue(String queue, String username, String queueAclType, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException
org.apache.hadoop.security.authorize.AuthorizationException
public org.apache.hadoop.yarn.server.webapp.dao.AppAttemptInfo getAppAttempt(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId)
req - the servlet request
res - the servlet response
appId - the application we want to get the appAttempt. It is a PathParam.
appAttemptId - the AppAttempt we want to get the info. It is a PathParam.
See also: WebServices.getAppAttempt(HttpServletRequest, HttpServletResponse, String, String)
public org.apache.hadoop.yarn.server.webapp.dao.ContainersInfo getContainers(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId)
req - the servlet request
res - the servlet response
appId - the application we want to get the containers info. It is a PathParam.
appAttemptId - the AppAttempt we want to get the info. It is a PathParam.
See also: WebServices.getContainers(HttpServletRequest, HttpServletResponse, String, String)
public org.apache.hadoop.yarn.server.webapp.dao.ContainerInfo getContainer(javax.servlet.http.HttpServletRequest req, javax.servlet.http.HttpServletResponse res, String appId, String appAttemptId, String containerId)
req - the servlet request
res - the servlet response
appId - the application we want to get the containers info. It is a PathParam.
appAttemptId - the AppAttempt we want to get the info. It is a PathParam.
containerId - the container we want to get the info. It is a PathParam.
See also: WebServices.getContainer(HttpServletRequest, HttpServletResponse, String, String, String)
public javax.ws.rs.core.Response updateSchedulerConfiguration(org.apache.hadoop.yarn.webapp.dao.SchedConfUpdateInfo mutationInfo, javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException, InterruptedException
This method updates the Scheduler configuration, and it is reachable by using RMWSConsts.SCHEDULER_CONF.
mutationInfo - the information for making scheduler configuration changes (supports adding, removing, or updating a queue, as well as global scheduler conf changes)
hsr - the servlet request
org.apache.hadoop.security.authorize.AuthorizationException - if the user is not authorized to invoke this method
InterruptedException - if interrupted

public javax.ws.rs.core.Response getSchedulerConfiguration(javax.servlet.http.HttpServletRequest hsr) throws org.apache.hadoop.security.authorize.AuthorizationException
This method retrieves all the Scheduler configuration, and it is reachable by using RMWSConsts.SCHEDULER_CONF.
hsr - the servlet request
org.apache.hadoop.security.authorize.AuthorizationException - if the user is not authorized to invoke this method

public void setNextInterceptor(RESTRequestInterceptor next)
Description copied from class: AbstractRESTRequestInterceptor
Sets the RESTRequestInterceptor in the chain.
Specified by: setNextInterceptor in interface RESTRequestInterceptor
Overrides: setNextInterceptor in class AbstractRESTRequestInterceptor
next - the RESTRequestInterceptor to set in the pipeline

public javax.ws.rs.core.Response signalToContainer(String containerId, String command, javax.servlet.http.HttpServletRequest req)
public void shutdown()
Description copied from class: AbstractRESTRequestInterceptor
Disposes the RESTRequestInterceptor.
Specified by: shutdown in interface RESTRequestInterceptor
Overrides: shutdown in class AbstractRESTRequestInterceptor
@VisibleForTesting public org.apache.hadoop.yarn.util.LRUCacheHashMap<RouterAppInfoCacheKey,org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.AppsInfo> getAppInfosCaches()
@VisibleForTesting public Map<org.apache.hadoop.yarn.server.federation.store.records.SubClusterId,DefaultRequestInterceptorREST> getInterceptors()
public void setAllowPartialResult(boolean allowPartialResult)
@VisibleForTesting public Map<org.apache.hadoop.yarn.server.federation.store.records.SubClusterInfo,org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.NodesInfo> invokeConcurrentGetNodeLabel() throws IOException, org.apache.hadoop.yarn.exceptions.YarnException
IOException
org.apache.hadoop.yarn.exceptions.YarnException
Copyright © 2008–2024 Apache Software Foundation. All rights reserved.