hdfs ranger user-group maintenance
The user-to-group relationship in hdfs is actually cached, using Google's Guava cache; the default expiry is 300 seconds (hadoop.security.groups.cache.secs).
A user can belong to multiple groups, and when hdfs resolves a user it usually fetches the user's group list along with it.
When checking permissions, only the permissions of the user and of the user's groups matter; the user-to-group relationship itself no longer needs to be looked up.
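For orientation, a minimal sketch of resolving groups through this cache (Groups.getUserToGroupsMappingService and getGroups are the actual Hadoop APIs; the user name "alice" is hypothetical):

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Groups;

public class GroupLookupDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // process-wide Groups instance; results are held in a Guava LoadingCache
        // for hadoop.security.groups.cache.secs (default 300) seconds
        Groups groups = Groups.getUserToGroupsMappingService(conf);
        List<String> memberships = groups.getGroups("alice"); // hypothetical user
        System.out.println("alice belongs to: " + memberships);
    }
}

The cache itself lives in org.apache.hadoop.security.Groups: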
package org.apache.hadoop.security;

/**
 * A user-to-groups mapping service.
 *
 * {@link Groups} allows for server to get the various group memberships
 * of a given user via the {@link #getGroups(String)} call, thus ensuring
 * a consistent user-to-groups mapping and protects against vagaries of
 * different mappings on servers and clients in a Hadoop cluster.
 */
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public class Groups {
  /**
   * Deals with loading data into the cache.
   */
  private class GroupCacheLoader extends CacheLoader<String, List<String>> {
    private ListeningExecutorService executorService;

    GroupCacheLoader() {
      if (reloadGroupsInBackground) {
        ThreadFactory threadFactory = new ThreadFactoryBuilder()
            .setNameFormat("Group-Cache-Reload")
            .setDaemon(true)
            .build();
        // With coreThreadCount == maxThreadCount we effectively
        // create a fixed size thread pool. As allowCoreThreadTimeOut
        // has been set, all threads will die after 60 seconds of non use
        ThreadPoolExecutor parentExecutor = new ThreadPoolExecutor(
            reloadGroupsThreadCount,
            reloadGroupsThreadCount,
            60,
            TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(),
            threadFactory);
        parentExecutor.allowCoreThreadTimeOut(true);
        executorService = MoreExecutors.listeningDecorator(parentExecutor);
      }
    }

    /**
     * This method will block if a cache entry doesn't exist, and
     * any subsequent requests for the same user will wait on this
     * request to return. If a user already exists in the cache,
     * and when the key expires, the first call to reload the key
     * will block, but subsequent requests will return the old
     * value until the blocking thread returns.
     * If reloadGroupsInBackground is true, then the thread that
     * needs to refresh an expired key will not block either. Instead
     * it will return the old cache value and schedule a background
     * refresh
     * @param user key of cache
     * @return List of groups belonging to user
     * @throws IOException to prevent caching negative entries
     */
    @Override
    public List<String> load(String user) throws Exception {
      LOG.debug("GroupCacheLoader - load.");
      TraceScope scope = null;
      Tracer tracer = Tracer.curThreadTracer();
      if (tracer != null) {
        scope = tracer.newScope("Groups#fetchGroupList");
        scope.addKVAnnotation("user", user);
      }
      List<String> groups = null;
      try {
        groups = fetchGroupList(user);
      } finally {
        if (scope != null) {
          scope.close();
        }
      }

      if (groups.isEmpty()) {
        if (isNegativeCacheEnabled()) {
          negativeCache.add(user);
        }
        // We throw here to prevent Cache from retaining an empty group
        throw noGroupsForUser(user);
      }

      // return immutable de-duped list
      return Collections.unmodifiableList(
          new ArrayList<>(new LinkedHashSet<>(groups)));
    }
  }
}
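The reloadGroupsInBackground flag and reloadGroupsThreadCount used above are driven by configuration. A sketch of the relevant keys, with their documented defaults shown as values:

import org.apache.hadoop.conf.Configuration;

public class GroupCacheConfigDemo {
    public static Configuration groupCacheDefaults() {
        Configuration conf = new Configuration();
        // lifetime of a cached user-to-groups entry (the 300 seconds mentioned earlier)
        conf.setLong("hadoop.security.groups.cache.secs", 300);
        // refresh expired entries in a background pool instead of blocking the caller
        conf.setBoolean("hadoop.security.groups.cache.background.reload", false);
        // size of that background pool (reloadGroupsThreadCount above)
        conf.setInt("hadoop.security.groups.cache.background.reload.threads", 3);
        // how long a user with no groups stays in the negative cache
        conf.setLong("hadoop.security.groups.negative-cache.secs", 30);
        return conf;
    }
}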
When hdfs uses LDAP, a user's group information is obtained via LDAP queries. To check a user's permissions, all groups associated with the user are looked up first, and the ranger agent then decides whether the user or one of those groups has the required permission.
package org.apache.hadoop.security;

/**
 * An implementation of {@link GroupMappingServiceProvider} which
 * connects directly to an LDAP server for determining group membership.
 *
 * This provider should be used only if it is necessary to map users to
 * groups that reside exclusively in an Active Directory or LDAP installation.
 * The common case for a Hadoop installation will be that LDAP users and groups
 * materialized on the Unix servers, and for an installation like that,
 * ShellBasedUnixGroupsMapping is preferred. However, in cases where
 * those users and groups aren't materialized in Unix, but need to be used for
 * access control, this class may be used to communicate directly with the LDAP
 * server.
 *
 * It is important to note that resolving group mappings will incur network
 * traffic, and may cause degraded performance, although user-group mappings
 * will be cached via the infrastructure provided by {@link Groups}.
 *
 * This implementation does not support configurable search limits. If a filter
 * is used for searching users or groups which returns more results than are
 * allowed by the server, an exception will be thrown.
 *
 * The implementation attempts to resolve group hierarchies,
 * to a configurable limit.
 * If the limit is 0, in order to be considered a member of a group,
 * the user must be an explicit member in LDAP. Otherwise, it will traverse the
 * group hierarchy n levels up.
 */
@InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
@InterfaceStability.Evolving
public class LdapGroupsMapping
    implements GroupMappingServiceProvider, Configurable {

  /**
   * Returns list of groups for a user.
   *
   * The LdapCtx which underlies the DirContext object is not thread-safe, so
   * we need to block around this whole method. The caching infrastructure will
   * ensure that performance stays in an acceptable range.
   *
   * @param user get groups for this user
   * @return list of groups for a given user
   */
  @Override
  public synchronized List<String> getGroups(String user) {
    /*
     * Normal garbage collection takes care of removing Context instances when
     * they are no longer in use. Connections used by Context instances being
     * garbage collected will be closed automatically. So in case the connection
     * is closed and we get a CommunicationException, retry a few times with a
     * new DirContext/connection.
     */

    // Tracks the number of attempts made using the same LDAP server
    int atemptsBeforeFailover = 1;

    for (int attempt = 1; attempt <= numAttempts; attempt++,
        atemptsBeforeFailover++) {
      try {
        return doGetGroups(user, groupHierarchyLevels);
      } catch (NamingException e) {
        LOG.warn("Failed to get groups for user {} (attempt={}/{}) using {}. " +
            "Exception: ", user, attempt, numAttempts, currentLdapUrl, e);
        LOG.trace("TRACE", e);

        if (failover(atemptsBeforeFailover, numAttemptsBeforeFailover)) {
          atemptsBeforeFailover = 0;
        }
      }

      // Reset ctx so that new DirContext can be created with new connection
      this.ctx = null;
    }

    return Collections.emptyList();
  }

  /**
   * Perform LDAP queries to get group names of a user.
   *
   * Perform the first LDAP query to get the user object using the user's name.
   * If one-query is enabled, retrieve the group names from the user object.
   * If one-query is disabled, or if it failed, perform the second query to
   * get the groups.
   *
   * @param user user name
   * @return a list of group names for the user. If the user can not be found,
   * return an empty string array.
   * @throws NamingException if unable to get group names
   */
  List<String> doGetGroups(String user, int goUpHierarchy)
      throws NamingException {
    DirContext c = getDirContext();

    // Search for the user. We'll only ever need to look at the first result
    NamingEnumeration<SearchResult> results = c.search(userbaseDN,
        userSearchFilter, new Object[]{user}, SEARCH_CONTROLS);
    // return empty list if the user can not be found.
    if (!results.hasMoreElements()) {
      LOG.debug("doGetGroups({}) returned no groups because the " +
          "user is not found.", user);
      return new ArrayList<>();
    }
    SearchResult result = results.nextElement();

    List<String> groups = null;
    if (useOneQuery) {
      try {
        /**
         * For Active Directory servers, the user object has an attribute
         * 'memberOf' that represents the DNs of group objects to which the
         * user belongs. So the second query may be skipped.
         */
        Attribute groupDNAttr = result.getAttributes().get(memberOfAttr);
        if (groupDNAttr == null) {
          throw new NamingException("The user object does not have '" +
              memberOfAttr + "' attribute." +
              "Returned user object: " + result.toString());
        }
        groups = new ArrayList<>();
        NamingEnumeration groupEnumeration = groupDNAttr.getAll();
        while (groupEnumeration.hasMore()) {
          String groupDN = groupEnumeration.next().toString();
          groups.add(getRelativeDistinguishedName(groupDN));
        }
      } catch (NamingException e) {
        // If the first lookup failed, fall back to the typical scenario.
        LOG.info("Failed to get groups from the first lookup. Initiating " +
            "the second LDAP query using the user's DN.", e);
      }
    }
    if (groups == null || groups.isEmpty() || goUpHierarchy > 0) {
      groups = lookupGroup(result, c, goUpHierarchy);
    }
    LOG.debug("doGetGroups({}) returned {}", user, groups);
    return groups;
  }
}
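Switching hdfs to this provider is configuration only. A hedged sketch of the core keys (the key names are real Hadoop properties; the URL, bind DN, and base DN values are hypothetical):

import org.apache.hadoop.conf.Configuration;

public class LdapMappingConfigDemo {
    public static Configuration ldapGroupMapping() {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.group.mapping",
                "org.apache.hadoop.security.LdapGroupsMapping");
        conf.set("hadoop.security.group.mapping.ldap.url", "ldap://ldap.example.com:389");      // hypothetical
        conf.set("hadoop.security.group.mapping.ldap.bind.user", "cn=admin,dc=example,dc=com"); // hypothetical
        conf.set("hadoop.security.group.mapping.ldap.base", "dc=example,dc=com");               // hypothetical
        // setting the memberof attribute enables the single-query ('useOneQuery') path shown above
        conf.set("hadoop.security.group.mapping.ldap.search.attr.memberof", "memberOf");
        return conf;
    }
}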
ranger user-group cache
The ranger codebase actually provides a user and user-group cache of its own; it is not yet clear to me where this cache is consumed.
Walking through the ranger plugin's check-permission flow, there is in fact no step that reads the user-to-group mapping.
When hdfs decides a user's permissions, the hdfs ranger plugin does not use this ranger user-group cache; instead, hdfs itself fetches the user's groups from LDAP.
public class RangerUserStoreEnricher extends RangerAbstractContextEnricher {
    private static final Logger LOG                    = LoggerFactory.getLogger(RangerUserStoreEnricher.class);
    private static final Logger PERF_SET_USERSTORE_LOG = RangerPerfTracer.getPerfLogger("userstoreenricher.setuserstore");

    public static final String USERSTORE_REFRESHER_POLLINGINTERVAL_OPTION = "userStoreRefresherPollingInterval";
    public static final String USERSTORE_RETRIEVER_CLASSNAME_OPTION       = "userStoreRetrieverClassName";

    private RangerUserStoreRefresher userStoreRefresher;
    private RangerUserStoreRetriever userStoreRetriever;
    private RangerUserStore          rangerUserStore;
    private boolean                  disableCacheIfServiceNotFound = true;
    private boolean                  dedupStrings                  = true;

    private final BlockingQueue<DownloadTrigger> userStoreDownloadQueue = new LinkedBlockingQueue<>();
    private Timer                                userStoreDownloadTimer;

    @Override
    public void init() {
        if (LOG.isDebugEnabled()) {
            LOG.debug("==> RangerUserStoreEnricher.init()");
        }

        super.init();

        String propertyPrefix              = getPropertyPrefix();
        String userStoreRetrieverClassName = getOption(USERSTORE_RETRIEVER_CLASSNAME_OPTION);
        long   pollingIntervalMs           = getLongOption(USERSTORE_REFRESHER_POLLINGINTERVAL_OPTION, 3600 * 1000);

        dedupStrings = getBooleanConfig(propertyPrefix + ".dedup.strings", true);

        if (StringUtils.isNotBlank(userStoreRetrieverClassName)) {
            try {
                @SuppressWarnings("unchecked")
                Class<RangerUserStoreRetriever> userStoreRetriverClass = (Class<RangerUserStoreRetriever>) Class.forName(userStoreRetrieverClassName);

                userStoreRetriever = userStoreRetriverClass.newInstance();
            } catch (ClassNotFoundException exception) {
                LOG.error("Class " + userStoreRetrieverClassName + " not found, exception=" + exception);
            } catch (ClassCastException exception) {
                LOG.error("Class " + userStoreRetrieverClassName + " is not a type of RangerUserStoreRetriever, exception=" + exception);
            } catch (IllegalAccessException exception) {
                LOG.error("Class " + userStoreRetrieverClassName + " illegally accessed, exception=" + exception);
            } catch (InstantiationException exception) {
                LOG.error("Class " + userStoreRetrieverClassName + " could not be instantiated, exception=" + exception);
            }

            if (userStoreRetriever != null) {
                disableCacheIfServiceNotFound = getBooleanConfig(propertyPrefix + ".disable.cache.if.servicenotfound", true);

                String cacheDir      = getConfig(propertyPrefix + ".policy.cache.dir", null);
                String cacheFilename = String.format("%s_%s_userstore.json", appId, serviceName);

                cacheFilename = cacheFilename.replace(File.separatorChar, '_');
                cacheFilename = cacheFilename.replace(File.pathSeparatorChar, '_');

                String cacheFile = cacheDir == null ? null : (cacheDir + File.separator + cacheFilename);

                userStoreRetriever.setServiceName(serviceName);
                userStoreRetriever.setServiceDef(serviceDef);
                userStoreRetriever.setAppId(appId);
                userStoreRetriever.setPluginConfig(getPluginConfig());
                userStoreRetriever.setPluginContext(getPluginContext());
                userStoreRetriever.init(enricherDef.getEnricherOptions());

                userStoreRefresher = new RangerUserStoreRefresher(userStoreRetriever, this, null, -1L, userStoreDownloadQueue, cacheFile);

                LOG.info("Created Thread(RangerUserStoreRefresher(" + getName() + ")");

                try {
                    userStoreRefresher.populateUserStoreInfo();
                } catch (Throwable exception) {
                    LOG.error("Exception when retrieving userstore information for this enricher", exception);
                }

                userStoreRefresher.setDaemon(true);
                userStoreRefresher.startRefresher();

                userStoreDownloadTimer = new Timer("userStoreDownloadTimer", true);

                try {
                    userStoreDownloadTimer.schedule(new DownloaderTask(userStoreDownloadQueue), pollingIntervalMs, pollingIntervalMs);

                    if (LOG.isDebugEnabled()) {
                        LOG.debug("Scheduled userStoreDownloadRefresher to download userstore every " + pollingIntervalMs + " milliseconds");
                    }
                } catch (IllegalStateException exception) {
                    LOG.error("Error scheduling userStoreDownloadTimer:", exception);
                    LOG.error("*** UserStore information will NOT be downloaded every " + pollingIntervalMs + " milliseconds ***");

                    userStoreDownloadTimer = null;
                }
            }
        } else {
            LOG.error("No value specified for " + USERSTORE_RETRIEVER_CLASSNAME_OPTION + " in the RangerUserStoreEnricher options");
        }

        if (LOG.isDebugEnabled()) {
            LOG.debug("<== RangerUserStoreEnricher.init()");
        }
    }
}
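The retriever class and polling interval above come from the enricher options declared in the service-def. A hypothetical sketch of those two options as a Java map (RangerAdminUserStoreRetriever is, to my knowledge, the retriever implementation shipped with Ranger; the 60-second interval is an example overriding the one-hour default):

import java.util.HashMap;
import java.util.Map;

public class EnricherOptionsDemo {
    public static Map<String, String> userStoreEnricherOptions() {
        Map<String, String> options = new HashMap<>();
        options.put("userStoreRetrieverClassName",
                "org.apache.ranger.plugin.contextenricher.RangerAdminUserStoreRetriever"); // assumed class name
        options.put("userStoreRefresherPollingInterval", "60000"); // example: 60s vs the 1h default
        return options;
    }
}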
ranger calls the user-download interface and then caches the user/user-group information to disk.
public class RangerUserStoreRefresher extends Thread {
    @Override
    public void run() {
        if (LOG.isDebugEnabled()) {
            LOG.debug("==> RangerUserStoreRefresher().run()");
        }

        while (true) {
            DownloadTrigger trigger = null;

            try {
                RangerPerfTracer perf = null;

                if (RangerPerfTracer.isPerfTraceEnabled(PERF_REFRESHER_INIT_LOG)) {
                    perf = RangerPerfTracer.getPerfTracer(PERF_REFRESHER_INIT_LOG,
                            "RangerUserStoreRefresher.run(lastKnownVersion=" + lastKnownVersion + ")");
                }

                trigger = userStoreDownloadQueue.take();

                populateUserStoreInfo();

                RangerPerfTracer.log(perf);
            } catch (InterruptedException excp) {
                LOG.debug("RangerUserStoreRefresher().run() : interrupted! Exiting thread", excp);
                break;
            } finally {
                if (trigger != null) {
                    trigger.signalCompletion();
                }
            }
        }

        if (LOG.isDebugEnabled()) {
            LOG.debug("<== RangerUserStoreRefresher().run()");
        }
    }

    public RangerUserStore populateUserStoreInfo() throws InterruptedException {
        RangerUserStore rangerUserStore = null;

        if (userStoreEnricher != null && userStoreRetriever != null) {
            try {
                rangerUserStore = userStoreRetriever.retrieveUserStoreInfo(lastKnownVersion, lastActivationTimeInMillis);

                if (rangerUserStore == null) {
                    if (!hasProvidedUserStoreToReceiver) {
                        rangerUserStore = loadFromCache();
                    }
                }

                if (rangerUserStore != null) {
                    userStoreEnricher.setRangerUserStore(rangerUserStore);

                    if (rangerUserStore.getUserStoreVersion() != -1L) {
                        saveToCache(rangerUserStore);
                    }

                    LOG.info("RangerUserStoreRefresher.populateUserStoreInfo() - Updated userstore-cache to new version, lastKnownVersion=" + lastKnownVersion + "; newVersion="
                            + (rangerUserStore.getUserStoreVersion() == null ? -1L : rangerUserStore.getUserStoreVersion()));

                    hasProvidedUserStoreToReceiver = true;
                    lastKnownVersion = rangerUserStore.getUserStoreVersion() == null ? -1L : rangerUserStore.getUserStoreVersion();
                    setLastActivationTimeInMillis(System.currentTimeMillis());
                } else {
                    if (LOG.isDebugEnabled()) {
                        LOG.debug("RangerUserStoreRefresher.populateUserStoreInfo() - No need to update userstore-cache. lastKnownVersion=" + lastKnownVersion);
                    }
                }
            } catch (RangerServiceNotFoundException snfe) {
                LOG.error("Caught ServiceNotFound exception :", snfe);

                // Need to clean up local userstore cache
                if (userStoreEnricher.isDisableCacheIfServiceNotFound()) {
                    disableCache();
                    setLastActivationTimeInMillis(System.currentTimeMillis());
                    lastKnownVersion = -1L;
                }
            } catch (InterruptedException interruptedException) {
                throw interruptedException;
            } catch (Exception e) {
                LOG.error("Encountered unexpected exception. Ignoring", e);
            }
        } else if (rangerRESTClient != null) {
            if (LOG.isDebugEnabled()) {
                LOG.debug("RangerUserStoreRefresher.populateUserStoreInfo() for Ranger Raz");
            }

            try {
                rangerUserStore = retrieveUserStoreInfo();

                if (rangerUserStore == null) {
                    if (!hasProvidedUserStoreToReceiver) {
                        rangerUserStore = loadFromCache();
                    }
                }

                if (rangerUserStore != null) {
                    if (rangerUserStore.getUserStoreVersion() != -1L) {
                        saveToCache(rangerUserStore);
                    }

                    LOG.info("RangerUserStoreRefresher.populateUserStoreInfo() - Updated userstore-cache for raz to new version, lastKnownVersion=" + lastKnownVersion + "; newVersion="
                            + (rangerUserStore.getUserStoreVersion() == null ? -1L : rangerUserStore.getUserStoreVersion()));

                    hasProvidedUserStoreToReceiver = true;
                    lastKnownVersion = rangerUserStore.getUserStoreVersion() == null ? -1L : rangerUserStore.getUserStoreVersion();
                    setLastActivationTimeInMillis(System.currentTimeMillis());
                } else {
                    if (LOG.isDebugEnabled()) {
                        LOG.debug("RangerUserStoreRefresher.populateUserStoreInfo() - No need to update userstore-cache for raz. lastKnownVersion=" + lastKnownVersion);
                    }
                }
            } catch (InterruptedException interruptedException) {
                throw interruptedException;
            } catch (Exception e) {
                LOG.error("Encountered unexpected exception. Ignoring", e);
            }
        } else {
            LOG.error("RangerUserStoreRefresher.populateUserStoreInfo() - no userstore receiver to update userstore-cache");
        }

        return rangerUserStore;
    }
}
In the ranger base plugin flow, policies and roles are downloaded and cached first; then, if an enricher is present, the user/group mapping is downloaded and cached separately.
public RangerBasePlugin(RangerPluginConfig pluginConfig, ServicePolicies policies, ServiceTags tags, RangerRoles roles, RangerUserStore userStore) {
    this(pluginConfig);

    init();

    setPolicies(policies);
    setRoles(roles);

    if (tags != null) {
        RangerTagEnricher tagEnricher = getTagEnricher();

        if (tagEnricher != null) {
            tagEnricher.setServiceTags(tags);
        } else {
            LOG.warn("RangerBasePlugin(tagsVersion=" + tags.getTagVersion() + "): no tag enricher found. Plugin will not enforce tag-based policies");
        }
    }

    if (userStore != null) {
        RangerUserStoreEnricher userStoreEnricher = getUserStoreEnricher();

        if (userStoreEnricher != null) {
            userStoreEnricher.setRangerUserStore(userStore);
        } else {
            LOG.warn("RangerBasePlugin(userStoreVersion=" + userStore.getUserStoreVersion() + "): no userstore enricher found. Plugin will not enforce user/group attribute-based policies");
        }
    }
}
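For orientation, a minimal sketch of how an embedding service usually bootstraps the plugin (the two-argument constructor, init(), setResultProcessor(), and RangerDefaultAuditHandler are standard plugin APIs; the service type and app id values are hypothetical):

import org.apache.ranger.plugin.audit.RangerDefaultAuditHandler;
import org.apache.ranger.plugin.service.RangerBasePlugin;

public class PluginBootstrapDemo {
    public static void main(String[] args) {
        RangerBasePlugin plugin = new RangerBasePlugin("hdfs", "hdfsdev"); // hypothetical values
        // init() downloads policies/roles and starts the enricher refreshers,
        // including RangerUserStoreEnricher when the service-def declares it
        plugin.init();
        plugin.setResultProcessor(new RangerDefaultAuditHandler());
    }
}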
hdfs check permission through ranger
The check-permission information passed in by hdfs:
@Override
public void checkPermission(String fsOwner, String superGroup, UserGroupInformation ugi,
                            INodeAttributes[] inodeAttrs, INode[] inodes, byte[][] pathByNameArr,
                            int snapshotId, String path, int ancestorIndex, boolean doCheckOwner,
                            FsAction ancestorAccess, FsAction parentAccess, FsAction access,
                            FsAction subAccess, boolean ignoreEmptyDir) throws AccessControlException {
    checkRangerPermission(fsOwner, superGroup, ugi, inodeAttrs, inodes, pathByNameArr, snapshotId, path, ancestorIndex, doCheckOwner, ancestorAccess, parentAccess, access, subAccess, ignoreEmptyDir, null, null);
}

private void checkRangerPermission(String fsOwner, String superGroup, UserGroupInformation ugi,
                                   INodeAttributes[] inodeAttrs, INode[] inodes, byte[][] pathByNameArr,
                                   int snapshotId, String path, int ancestorIndex, boolean doCheckOwner,
                                   FsAction ancestorAccess, FsAction parentAccess, FsAction access,
                                   FsAction subAccess, boolean ignoreEmptyDir, String operationName, CallerContext callerContext) throws AccessControlException {
    AuthzStatus authzStatus  = AuthzStatus.NOT_DETERMINED;
    String      resourcePath = path;
    AuthzContext context     = new AuthzContext(rangerPlugin, ugi, operationName, access == null && parentAccess == null && ancestorAccess == null && subAccess == null);

    if (LOG.isDebugEnabled()) {
        LOG.debug("==> RangerAccessControlEnforcer.checkPermission("
                + "fsOwner=" + fsOwner + "; superGroup=" + superGroup + ", inodesCount=" + (inodes != null ? inodes.length : 0)
                + ", snapshotId=" + snapshotId + ", user=" + context.user + ", provided-path=" + path + ", ancestorIndex=" + ancestorIndex
                + ", doCheckOwner=" + doCheckOwner + ", ancestorAccess=" + ancestorAccess + ", parentAccess=" + parentAccess
                + ", access=" + access + ", subAccess=" + subAccess + ", ignoreEmptyDir=" + ignoreEmptyDir + ", operationName=" + operationName
                + ", callerContext=" + callerContext + ")");
    }

    ...
}
ranger obtains the user's associated groups directly via ugi.getGroupNames(), then decides from the user and group names whether the operation is allowed. The ranger agent only has to evaluate this against its parsed policy JSON; it no longer needs an extra backend-database query to find out which groups the user belongs to.
class AuthzContext {
    public final RangerHdfsPlugin plugin;
    public final String           user;
    public final Set<String>      userGroups;
    public final String           operationName;
    public final boolean          isTraverseOnlyCheck;

    public RangerHdfsAuditHandler auditHandler = null;
    private RangerAccessResult    lastResult   = null;

    public AuthzContext(RangerHdfsPlugin plugin, UserGroupInformation ugi, String operationName, boolean isTraverseOnlyCheck) {
        this.plugin              = plugin;
        this.user                = ugi != null ? ugi.getShortUserName() : null;
        this.userGroups          = ugi != null ? Sets.newHashSet(ugi.getGroupNames()) : null;
        this.operationName       = operationName;
        this.isTraverseOnlyCheck = isTraverseOnlyCheck;
    }

    public void saveResult(RangerAccessResult result) {
        if (result != null) {
            this.lastResult = result;
        }
    }

    public RangerAccessResult getLastResult() {
        return lastResult;
    }
}
private AuthzStatus isAccessAllowed(INode inode, INodeAttributes inodeAttribs, String path, FsAction access, AuthzContext context) {
    AuthzStatus ret       = null;
    String      pathOwner = inodeAttribs != null ? inodeAttribs.getUserName() : null;

    if (pathOwner == null && inode != null) {
        pathOwner = inode.getUserName();
    }

    if (RangerHadoopConstants.HDFS_ROOT_FOLDER_PATH_ALT.equals(path)) {
        path = HDFS_ROOT_FOLDER_PATH;
    }

    if (LOG.isDebugEnabled()) {
        LOG.debug("==> RangerAccessControlEnforcer.isAccessAllowed(" + path + ", " + access + ", " + context.user + ")");
    }

    Set<String> accessTypes = access2ActionListMapper.get(access);

    if (accessTypes == null) {
        LOG.warn("RangerAccessControlEnforcer.isAccessAllowed(" + path + ", " + access + ", " + context.user + "): no Ranger accessType found for " + access);

        accessTypes = access2ActionListMapper.get(FsAction.NONE);
    }

    if (accessTypes.size() > 0) {
        RangerHdfsAccessRequest request = new RangerHdfsAccessRequest(inode, path, pathOwner, access, accessTypes.iterator().next(), context.operationName, context.user, context.userGroups);

        if (accessTypes.size() > 1) {
            RangerAccessRequestUtil.setAllRequestedAccessTypes(request.getContext(), accessTypes);
        }

        RangerAccessResult result = context.plugin.isAccessAllowed(request, context.auditHandler);

        context.saveResult(result);

        if (result == null || !result.getIsAccessDetermined()) {
            ret = AuthzStatus.NOT_DETERMINED;
        } else if (!result.getIsAllowed()) { // explicit deny
            ret = AuthzStatus.DENY;
        } else { // allowed
            ret = AuthzStatus.ALLOW;
        }
    }

    if (ret == null) {
        ret = AuthzStatus.NOT_DETERMINED;
    }

    if (LOG.isDebugEnabled()) {
        LOG.debug("<== RangerAccessControlEnforcer.isAccessAllowed(" + path + ", " + access + ", " + context.user + "): " + ret);
    }

    return ret;
}
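access2ActionListMapper translates an HDFS FsAction into Ranger access-type names. A simplified sketch of how such a table might be populated (the real enforcer covers every FsAction combination; only the common cases are shown here):

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.hadoop.fs.permission.FsAction;

class AccessTypeTableDemo {
    static final Map<FsAction, Set<String>> access2ActionListMapper = new HashMap<>();

    static {
        access2ActionListMapper.put(FsAction.NONE,       new HashSet<>());
        access2ActionListMapper.put(FsAction.READ,       new HashSet<>(Arrays.asList("read")));
        access2ActionListMapper.put(FsAction.WRITE,      new HashSet<>(Arrays.asList("write")));
        access2ActionListMapper.put(FsAction.EXECUTE,    new HashSet<>(Arrays.asList("execute")));
        access2ActionListMapper.put(FsAction.READ_WRITE, new HashSet<>(Arrays.asList("read", "write")));
        access2ActionListMapper.put(FsAction.ALL,        new HashSet<>(Arrays.asList("read", "write", "execute")));
    }
}

The request handed to the plugin implements the RangerAccessRequest interface: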
public interface RangerAccessRequest {
    RangerAccessResource getResource();
    String getAccessType();
    boolean isAccessTypeAny();
    boolean isAccessTypeDelegatedAdmin();
    String getUser();
    Set<String> getUserGroups();
    Set<String> getUserRoles();
    Date getAccessTime();
    String getClientIPAddress();
    String getRemoteIPAddress();
    List<String> getForwardedAddresses();
    String getClientType();
    String getAction();
    String getRequestData();
    String getSessionId();
    String getClusterName();
    String getClusterType();
    Map<String, Object> getContext();
    RangerAccessRequest getReadOnlyCopy();
    ResourceMatchingScope getResourceMatchingScope();

    default Map<String, ResourceElementMatchingScope> getResourceElementMatchingScopes() {
        return Collections.emptyMap();
    }

    enum ResourceMatchingScope { SELF, SELF_OR_DESCENDANTS }
    enum ResourceElementMatchingScope { SELF, SELF_OR_CHILD, SELF_OR_PREFIX }
    enum ResourceElementMatchType { NONE, SELF, CHILD, PREFIX }
}
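Putting the pieces together, a hedged sketch of building such a request with the stock implementations and evaluating it (RangerAccessResourceImpl, RangerAccessRequestImpl, and isAccessAllowed are standard plugin-side APIs; the path, user, and group are hypothetical):

import java.util.Collections;
import org.apache.ranger.plugin.policyengine.RangerAccessRequestImpl;
import org.apache.ranger.plugin.policyengine.RangerAccessResourceImpl;
import org.apache.ranger.plugin.policyengine.RangerAccessResult;
import org.apache.ranger.plugin.service.RangerBasePlugin;

public class AccessCheckDemo {
    // 'plugin' is an initialized RangerBasePlugin, as in the bootstrap sketch above
    static boolean canRead(RangerBasePlugin plugin) {
        RangerAccessResourceImpl resource = new RangerAccessResourceImpl();
        resource.setValue("path", "/data/project1/file.txt"); // hypothetical path

        RangerAccessRequestImpl request = new RangerAccessRequestImpl();
        request.setResource(resource);
        request.setAccessType("read");                             // a Ranger access type, cf. the table above
        request.setUser("alice");                                  // hypothetical user
        request.setUserGroups(Collections.singleton("analysts"));  // hypothetical group

        RangerAccessResult result = plugin.isAccessAllowed(request);

        return result != null && result.getIsAllowed();
    }
}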
linux: one user can belong to multiple groups
Can a Linux user belong to more than one group?
Users are organized into groups; every user is in at least one group and may be in others. Group membership gives you access to files and directories whose permissions grant rights to that group.
For example, you can add the user username to the supplementary groups group1 and group2 (the -a flag appends, preserving existing memberships) with the following usermod command:
usermod -a -G group1,group2 username